
Congressional App Challenge winners announced, apps to be on display in US Capitol Building
The winning app, "Adventure Challenge," was developed by Anderson and Davenport in partnership with Montana Fish, Wildlife and Parks.
In Montana's 2nd Congressional District, U.S. Rep. Matt Rosendale named Code Girls United members Makena Pedersen and Aurora Obie of Helena as winners for their app "Stay Fetch," which they designed to aid animal lovers in finding and adopting the pet that will best suit their lives.
The winning apps are set to be displayed in the U.S. Capitol Building during the National Science Fair's "House of Code" event April 8-9. Both Montana teams are raising funds to attend the event in Washington, D.C.
"This is a wonderful opportunity for these students to represent their home state and showcase their hard work and dedication to solving a community issue. Seeing young girls recognized for their technical accomplishments in a predominantly male field is so important. We are so proud of our students," Code Girls United CEO Marianne Smith said in a statement.
The Congressional App Challenge is a nationwide event that allows middle and high school students to showcase their skills by creating and exhibiting their software applications.
Code Girls United is a nonprofit that offers free after-school programming for girls in fourth through eighth grade and for tribal high school girls in Montana. In addition to computer science and basic business skills, the girls build team-building, public speaking and presentation skills, as well as self-confidence.
For more information or to donate, visit www.codegirlsunited.org.
Related Articles


The Star
Water tariff hike a double whammy, say Selangor folk
THE Selangor government should reconsider the new water tariff rates for the poor. Concerned city dwellers claim that high water usage was sometimes inevitable due to the number of people in a household and other reasons like maintaining a garden.

Kota Bayu Emas Residents Association committee member Brian Raj said a family of four would easily use more than 20 cubic metres (m3) a month and be subjected to the steep increase in their water bills next month. The 75-year-old said that although there were only two people in his own household, his water usage was much more than 20m3 because he had a garden to maintain.

'I live in a landed property and I have many ornamental plants in my compound.

'This is my hobby. I have to water the plants twice a day, otherwise it will die,' he said.

While some can afford the increase, he said the poor may be burdened by it and have no other choice.

Former National Water Services Commission commissioner Sarajun Hoda Abdul Hassan from Bandar Parklands, Klang, also said the hike would burden the poor.

'Such a steep increase will affect the poor people.

'Especially in areas like Bandar Parkland, where there is a problem of water supply, it was unfair to impose such a hike.

'At least it should be done in phases in tandem with the improvement of the infrastructure,' he said.

Former chairman of Section 14 Residents Association Selva Sugumaran Perumal said the water tariff for residential properties should never go up because the increase would be a double whammy for ordinary people.

'The increased cost for other types of properties like commercial and industrial will be passed on to the customers.

'Ordinary folks will need to pay an increased fee for their own homes and absorb the additional cost from businesses,' he said.

Meanwhile, a condominium resident from Pandan Indah, Ampang, who only wanted to be known as Toh, said she was okay with the fee hike.

'Personally, as long as the management corporation distributes the fee fairly among the residents, it will be okay.

'Some people may take things for granted if the value is low,' she said.

In a statement on Friday (Aug 1), Selangor Mentri Besar Datuk Seri Amirudin Shari announced the new rates that would take effect on Sept 1.

Households using between 20m3 and 35m3 of water monthly in Selangor, Kuala Lumpur and Putrajaya will now have to pay RM1.62/m3, an increase of 30 sen.

Homes using more than 35m3 of water monthly, which is equivalent to 35,000 litres, will now have to pay RM3.51/m3 for each cubic metre above 35m3, an increase of RM0.88/m3.

However, homes using up to 20 cubic metres (m3) will not face any increase and will continue paying the existing rate of RM0.65/m3. The minimum charge for domestic users will also remain at RM6.50.

Those in condominiums, estates and government quarters will see an increase of RM0.41/m3, with consumers having to pay RM2.09 for each cubic metre. For condominiums, the minimum charge of RM173 per month will remain without any increase, while the minimum rate for estates and government quarters will move to RM20.90 a month.

Places of worship and welfare institutions will see a minimal increase of RM0.10/m3 to RM0.76/m3.

Businesses, commercial entities and non-domestic buildings will be charged the same rate as domestic households using more than 35m3, which is RM3.51/m3, an increase of RM0.57 for each cubic metre. Non-domestic buildings consuming more than 35m3 of water will move up to a rate of RM3.83/m3 for each cubic metre over 35m3.
The shipping industry will have to pay a new rate of RM8.01/m3. The state has also included data centres as a new category, given that they require over a million litres per day to cool the systems in these centres. Data centres will now be charged RM5.31/m3 of water.
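To make the arithmetic above concrete, here is a minimal Python sketch of how a monthly domestic bill might be worked out under the quoted rates. It assumes the bands apply progressively (each rate charged only on the usage falling within its band) and that the RM6.50 minimum acts as a floor; the article does not spell out either point, and the domestic_bill function and the 25m3 example are purely illustrative.

# Illustrative only: domestic water bill in Selangor/KL/Putrajaya under the
# new rates reported above (effective Sept 1), assuming progressive bands.
def domestic_bill(usage_m3: float) -> float:
    RATE_FIRST_20 = 0.65   # RM/m3, unchanged for usage up to 20m3
    RATE_20_TO_35 = 1.62   # RM/m3 for the portion between 20m3 and 35m3
    RATE_ABOVE_35 = 3.51   # RM/m3 for each cubic metre above 35m3
    MINIMUM_CHARGE = 6.50  # RM, minimum monthly charge for domestic users

    first = min(usage_m3, 20) * RATE_FIRST_20
    middle = max(min(usage_m3, 35) - 20, 0) * RATE_20_TO_35
    top = max(usage_m3 - 35, 0) * RATE_ABOVE_35
    return max(first + middle + top, MINIMUM_CHARGE)

# Example: a household using 25m3 a month would pay
# 20 x 0.65 + 5 x 1.62 = RM21.10 under these assumptions.
print(f"RM{domestic_bill(25):.2f}")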


The Star
Should Hong Kong plug legal gaps to stamp out AI-generated porn?
Betrayal was the first thought to cross the mind of a University of Hong Kong (HKU) law student when she found out that a classmate, whom she had considered a friend, had used AI to depict her naked.

'I felt a great sense of betrayal and was traumatised by the friendship and all the collective memories,' said the student, who has asked to be called 'B'.

She was not alone. More than 700 deepfake pornographic images were found on her male classmate's computer, including AI-generated pictures of other HKU students.

B said she felt 'used' and that her bodily autonomy and dignity had been violated. Angered by what she saw as inaction by HKU in the aftermath of the discovery and the lack of protection under existing city laws, B joined forces with two other female students to set up an Instagram account to publicise the case.

HKU said it had issued a warning letter to the male student and told him to make a formal apology. It later said it was reviewing the incident and vowed to take further action, with the privacy watchdog also launching a criminal investigation. But the women said they had no plans to file a police report as the city lacked the right legal 'framework'.

The case has exposed gaps in legislation around the non-consensual creation of images generated by artificial intelligence (AI), especially when the content was only used privately and not published to a wider audience. Authorities have also weighed in and pledged to examine regulations in other jurisdictions to decide on the path forward.

Hong Kong has no dedicated laws tackling copyright issues connected to AI-generated content or covering the creation of unauthorised intimate images that have not been published. Legal experts told the Post that more rules were needed, while technology sector players urged authorities to avoid stifling innovation through overregulation.

The HKU case is not the only one to have put the spotlight on personal information being manipulated or used by AI without consent.

In May, a voice highly similar to that of an anchor from local broadcaster TVB was found in an advertisement and a video report from another online news outlet. The broadcaster, which said it suspected another news outlet and an advertiser had used AI to create the voice-over, warned that the technology was capable of creating convincing audio of a person's voice without prior authorisation.

Separately, a vendor on an e-commerce platform was found to be selling AI-created videos mimicking a TVB news report, with lifelike anchors delivering birthday messages.

In response to a Post inquiry, TVB's assistant general manager in corporate communications, Bonnie Wong, said the broadcaster had issued a warning to the vendor and alerted the platform to demand the immediate removal of the infringing content.

What the law says

Lawyers and a legal academic who spoke to the Post pointed to gaps in criminal, privacy and copyright laws covering the production of intimate images using AI without consent, as well as a lack of sufficient remedies for victims.

Barrister Michelle Wong Lap-yan said prosecutors would have a hard time laying charges using existing offences such as access to a computer with criminal or dishonest intent, and publication of, or threatening to publish, intimate images without consent. Both offences are under the Crimes Ordinance.

The barrister, who has experience advising victims of sexual harassment, said the first offence governing computer use only applied when a perpetrator accessed another person's device.
'Hence, if the classmate generated the images using his own computer, the act may not be caught under [the offence],' Wong said.

As for the second offence, a perpetrator needed to publish the images or at least threaten to do so.

'Unless the classmate threatened to publish the deepfake images, he would not be caught under this newly enacted legislation [that came into force in 2021],' Wong said.

She said that 'publishing' was defined broadly under the Crimes Ordinance to cover situations such as when a person distributed, circulated, made available, sold, gave or lent images to another person. The definition also included showing the content to another person in any manner.

Craig Choy Ki, a US-based barrister, told the Post that challenges also existed in using privacy and copyright laws.

He said the city's anti-doxxing laws only applied if the creator of the image disclosed the intimate content to other people. Privacy laws could come into play depending on how the creator had collected personal information when generating the images, and whether they had been published without permission to infringe upon the owner's 'moral rights'.

The three alleged victims in the HKU case told the Post that a friend of the male student had discovered the AI-generated pornographic images when borrowing his computer in February.

'As far as we know, [the male student] did not disclose the images to his friend himself,' the trio said. They added that the person who made the discovery told them they were among the files.

Stuart Hargreaves, an associate professor from the Chinese University of Hong Kong's law faculty, said legislation covering compensation for harm was inadequate for helping victims of non-consensual AI-produced intimate images.

In an academic paper, Hargreaves argued that a court might not deem such images, which were known to be fake and fabricated, as able to cause reputational harm if created or published without consent. Suing the creator for deliberately causing emotional distress could also be difficult given the level of harm required and the threshold usually required by a court.

'In the case of targeted synthetic pornography, the intent of the creator is often for private gratification or esteem building in communities that trade the images. Any harm to the victim may often be an afterthought,' Hargreaves wrote.

A Post check found that major AI chatbots prohibited the creation of pictures using specific faces of real people or creating sexual content, but free-of-charge platforms specialising in the generation of such content were readily available online.

The three female HKU students also told the Post that one of the pieces of software used to produce the more than 700 intimate images found on their classmate's laptop was '

On its website, the software is described as a 'free AI undressing tool that allows users to generate images of girls without clothing'. The provider is quoted as saying: 'There are no laws prohibiting the use of [the] application for personal purposes.' It also describes the software as operating 'within the bounds of current legal frameworks'.

Clicking on a button to create a 'deepnude', a sign-in page tells users they must be aged over 18, that they cannot use another person's photo without their permission and that they will be responsible for the images created. But users can simply ignore the messages.
The Post found that the firm behind this software, Itai Tech Limited, is under investigation by the UK's communications services regulator for allegedly providing ineffective protection to prevent children from accessing pornographic content. The UK-registered firm is also being sued by the San Francisco City Attorney for operating websites producing non-consensual intimate images of adults. The Post has reached out to Itai Tech for comment.

How can deepfake porn be regulated?

Responding to an earlier Post inquiry in July, the Innovation, Technology and Industry Bureau said it would continue to monitor the development and application of AI in the city, draw references from elsewhere and 'review the existing legislation' if necessary. Chief Executive John Lee Ka-chiu also pledged to examine global regulatory trends and research international 'best practices' on regulating the emerging technology.

Some overseas jurisdictions have introduced offences for deepfake porn that extend beyond the publication of such content.

In Australia, the creation and transmission of deepfake pornographic images made without consent was banned last year, with offenders facing up to seven years in jail if they had also created the visuals.

South Korea also established offences last year for possessing and viewing deepfake porn. Offenders can be sentenced to three years' imprisonment or fined up to 30 million won (US$21,666).

The United Kingdom has floated a proposal to ban the creation of sexually explicit images of adults without their consent, regardless of whether the creator intended to share the content. Perpetrators would have the offence added to their criminal record and face a fine, while those who shared their unauthorised creations could face jail time.

The lawyers also noted that authorities would need to lay out how a law banning the non-consensual creation or possession of sexually explicit images could be enforced.

Wong, the barrister, said law enforcement and prosecutors should have adequate experience in establishing and implementing the charges with reference to existing legislation targeting child pornography. In such cases, authorities can access an arrested person's electronic devices with their consent or with a court warrant.

Is a legal ban the right solution?

Hargreaves suggested Hong Kong consider establishing a broad legal framework to target unauthorised sexually explicit deepfake content with both civil and criminal law reforms. He added that criminal law alone was insufficient to tackle such cases, which he expected to become more common in the future.

'It is a complicated problem that has no single silver bullet solution,' Hargreaves said.

The professor suggested the introduction by statute of a new right of action based upon misappropriation of personality to cover the creation, distribution or public disclosure of visual or audiovisual material that depicted an identifiable person in a false sexual context without their consent. Its goal would be to cover harm to a victim's dignity and allow them to seek restitution and quicker remedies.

United States-based barrister Choy said the city could establish a tiered criminal framework with varying levels of penalties, while also allowing defences such as satire or news reports. He said the law should only apply when an identifiable person was depicted in the content.

But Hargreaves and Choy cautioned that any new law would have to strike a balance with freedom of expression, especially for cases in which content had not been published.
The lawyers said it would be difficult to draw a legal boundary between what might be a private expression of sexual thought and protection for people whose information was appropriated without their consent.

Hargreaves said films would not be deemed illegal for portraying a historical or even a current figure engaging in sexual activity, adding that a large amount of legitimate art could be sexually explicit without infringing upon the law. But the same could not be said for pornographic deepfake content depicting a known person without their consent.

'Society should express disapproval of the practice of creating sexually explicit images of people without their consent, even if there is no intent that those images be shared,' Hargreaves said.

Choy said the city would need to consult extensively across society on the acceptable moral standards to decide which legal steps would be appropriate and to avoid setting up a 'thought crime' for undisclosed private material.

'When we realise there is a moral issue, the use of the law is to set the lowest acceptable bar in society as guidelines for everyone. The discussion should include how the legal framework should be laid out and whether laws should be used,' he said.

But penalties might only provide limited satisfaction for victims, as they had potentially suffered emotional trauma that was not easily remedied under the law.

The barrister added that the city should also consider whether the creator of the unauthorised content should be the sole bearer of legal consequences, as well as whether AI service providers should bear some responsibility.

'I think we sometimes overweaponise the law, thinking that it would solve all problems. With AI being such a novel technology which everyone is still figuring out its [moral] boundaries, education is also important,' Choy said.

Will laws hurt tech progress?

Lawmakers with ties to the technology sector have expressed concerns over possible legislation banning the creation of deepfake pornography without others' consent, saying that the law could not completely stamp out misuse and that the city risked stifling a rapidly developing area.

Duncan Chiu, a legislator representing the technology sector, likened the potential ban on creating the content to a prohibition on the possession of pornography. He said a jurisdiction could block access to websites within its remit but was unable to prevent internet users from accessing the same material in another country that permitted the content.

'This situation is also found elsewhere on the internet. One jurisdiction enacts a law, but you can't stop people from accessing the said software in other countries or regions,' Chiu said.

The lawmaker said AI platforms had been working together to establish regulations on ethical concerns, such as labelling images with watermarks to identify them as digital creations.

He also gave the example of 'hallucinations', which are incorrect or misleading results generated by AI models, saying the city did not need to legislate over the generation of false content.

'Many generative AI programmes are able to reveal sources behind their answers with a click of a button. Hallucinations can be adjusted away too. It would have been wrong to legislate against this back then, as technology would advance past this,' Chiu said.

Johnny Ng Kit-chong, a lawmaker who has incubated tech start-ups in the city, said a social consensus on the use of AI was needed before moving to the legislative stage.
Ng said that banning the creation of sexually explicit deepfake content produced without consent would leave the burden on technology firms to establish rules on their platforms.

'This might not affect society much as most people would think regulations on sexual images are [inappropriate]. However, to start-ups, they would be limited in their development of artificial intelligence functions,' Ng said.

Ng said Hong Kong could reference the EU Artificial Intelligence Act, which classified AI by the amount of risk with corresponding levels of regulation. Some practices, such as using the technology for social scoring, were banned. But the use of AI to generate sexual content was permitted and not listed as a high-risk practice.

Yeung Kwong-chak, CEO of start-up DotAI, told the Post that businesses might welcome more clarity on legal boundaries. His firm uses AI to provide corporate services and also offers training courses for individuals and businesses to learn how to harness the technology.

'Large corporations are concerned about risks [associated with using new technologies], such as data leaks or if AI-generated content may offend internal policies. Having laws will give them a sense of certainty in knowing where the boundaries are, encouraging their uptake of the technology,' Yeung said.

The three students in the HKU case said they hoped the law would keep up with the times to offer protection to victims, provide a deterrent and outlaw the creation of pornographic deepfake images without consent.

'The risk of the creation of deepfake images does not lie only in 'possession', but more importantly in the acquisition of indecent images without consent, and the potential distribution of images [of which the] authenticity may become more and more difficult to distinguish as AI continues to develop,' they warned.

They added that 'permanent and substantial' punishment for creators of these non-consensual images would do the most to ease their concerns.

'We hope the punishment can be sufficient so that he can be constantly, impactfully reminded of his wrongdoing and not commit such an irritating act ever again,' the students said. - SOUTH CHINA MORNING POST


The Star
Trump administration sends mixed messages on China trade pact
The US government sent mixed messages this week on where the latest trade agreement with China, including a possible extension of the pause on tariff hikes, is headed. Asked by a reporter at the regular press briefing whether an extension of the current pause on import tariffs aimed at each other's products 'was on the table', White House press secretary Karoline Leavitt said, 'I don't think so, but I'll let [Treasury Secretary Scott Bessent] speak on that, because he's leading these negotiations.'