NYC to open new schools focused on AI, career education, dyslexia support


Yahoo · 05-05-2025

New York City is opening seven new public schools to expand access to career and technical education and dyslexia support.
The latest schools will add more than 3,800 seats in all boroughs but Manhattan. Top education officials say the new schools are part of a broader strategy to create more high-quality programs closer to where families live.
'These are the school choices that our families deserve,' Schools Chancellor Melissa Aviles-Ramos said Monday at the school system's lower Manhattan headquarters.
The most recent cohort includes two Brooklyn schools, one focused on artificial intelligence and the other on students with reading challenges.
The Middle School of Innovation in Bensonhurst has plans to use AI to help students learn, support teachers and teach ethical use, according to Principal Eileen Herusso. Virtual reality will be part of core subjects, such as math and science, engaging students through immersive, real-world scenarios.
'For example, with systems of equations, the students may be placed in an airport setting, and there's two planes. They're about to crash, and they have to figure out how to stop them from crashing. Really, they're finding a point of intersection,' Herusso said.
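The collision scenario Herusso describes is a standard systems-of-equations exercise: each plane's flight path is a line, and the potential crash point is where the two lines cross. A minimal sketch, using made-up flight paths (the specific numbers are illustrative, not from the curriculum):

```python
def intersection(m1, b1, m2, b2):
    """Solve y = m1*x + b1 and y = m2*x + b2 simultaneously.

    Setting the two expressions for y equal gives
    m1*x + b1 = m2*x + b2, so x = (b2 - b1) / (m1 - m2).
    """
    if m1 == m2:
        raise ValueError("parallel paths never intersect")
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Plane A climbs along y = 2x + 1; plane B descends along y = -x + 7.
# They would meet where 2x + 1 = -x + 7, i.e. at x = 2, y = 5.
print(intersection(2, 1, -1, 7))  # (2.0, 5.0)
```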
Central Brooklyn Literacy Academy is modeled after a newly launched school in the Bronx and will feature a level of specialized services typically reserved for private schools. The school in Crown Heights will open with 72 students in the second and third grade, eventually expanding through middle school.
'When it comes to things like phonics, they're going to get more time, more practice, more support in those areas,' said Jason Borges, the founding principal, who previously oversaw the Adams administration's sweeping literacy overhaul.
Another school, Queens International High School, is expected to enroll newly arrived immigrant students who speak limited English. Staten Island Rise Academy, a District 75 school for students with significant challenges, is the first such school to open in the borough in more than three decades. In the Bronx, a new STEAM center will encompass high schools from three districts.
Some others previously covered by The News are finally opening: HBCU Early College Prep and the Northwell School of Health Sciences, a health care career school backed by former Mayor Michael Bloomberg. The HBCU prep school has drawn more than 1,000 applications for just 100 seats in its first year.
The opening of new schools comes as New York City grapples with major enrollment declines that, though they have leveled off in recent years, continue to threaten the school system. While dwindling numbers of students have forced some schools to close or merge with those nearby, city education officials insisted that is not a reason to stop launching new programs.
'Our new school strategy is designed to directly address that challenge,' said First Deputy Chancellor Dan Weisberg. 'We can use the space in less responsive ways, and that's going to make our enrollment issue worse.'


Related Articles

Artificial Intelligence Is Not Intelligent

Atlantic

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines.
Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.' These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions. Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'Chatgpt induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. 
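The mechanism described above, a statistically informed guess about which word is likely to follow another, can be illustrated with a toy bigram counter. This is nothing like a real LLM in scale or architecture (the corpus and words here are invented for illustration), but the core idea of predicting the next token from observed frequencies is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "nearly the entire internet."
corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it follows "the" twice, "mat" once
```

No understanding is involved at any point: the program only tallies co-occurrences, which is why the essay's distinction between mimicry and thought holds even as the models scale up.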
Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.' Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.' Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' 
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him. This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.' Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. 
These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared their worst consequences.

‘Vast Majority' of new homes will have solar panels, says Miliband
Yahoo

Builders could be required to install solar panels on the 'vast majority' of new homes in England, according to Energy Secretary Ed Miliband. It comes a month after Downing Street confirmed the panels should be installed on as many new properties as possible, amid speculation that ministers will make them a mandatory requirement on new builds by 2027.

Speaking to the BBC, Mr Miliband described the proposed policy as 'just common sense' and said the panels should be 'almost universal' on English homes. Changes to regulations will be laid out in the Future Homes Standard, due to be published later this year. The previous Conservative Government considered a proposal that would have mandated rooftop solar panels to cover 40% of a building's ground area or equivalent.

'The problem about the previous system was that it said you had to have a certain percentage of coverage of solar panels, but if you couldn't achieve that percentage, you didn't have to do anything at all,' Mr Miliband said. 'Under our plans, we are not going to say that. We are going to say even if you can't hit 40% you will still have to have some solar panels, except in rare, exceptional cases.' He added that the number of homes fitted with solar panels needed to be 'much higher'.

The policy is estimated to add between £3,000 and £4,000 to the cost of construction, but to then save owners more than £1,000 on their annual energy bills, according to the Times, which first reported the change.

Asked in May whether housebuilders would be legally required to fit the panels, Prime Minister Sir Keir Starmer's official spokesman said: 'Of course we want to see solar panels on as many new homes as possible. The Future Homes Standard, which will be published in the coming months, will include measures to drive this; we're working on the detail of that and will provide an update on that in due course. But it's good news for householders who want lower energy bills.'
Under the Government's new proposals, 80% of new builds would reportedly be required to have solar panels covering 40% of their ground area, while 19% would have slightly fewer because of exemptions, including roof pitch and overshading.

Ministers last year rejected a private members' Bill aiming to force housebuilders to install solar panels on the roof of every new home, saying the proposals would potentially slow down construction and add to building costs. The Government's manifesto included a pledge to build 1.5 million new homes over the course of the Parliament.

AI-generated Pope sermons flood YouTube, TikTok
Yahoo

AI-generated video and audio clips of Pope Leo XIV are proliferating rapidly online, racking up views as platforms struggle to police them. An AFP investigation identified dozens of YouTube and TikTok pages that have been churning out AI-generated messages delivered in the pope's voice or otherwise attributed to him since he took charge of the Catholic Church last month. The hundreds of fabricated sermons and speeches, in English and Spanish, underscore how easily hoaxes created using artificial intelligence can elude detection and dupe viewers.

"There's natural interest in what the new pope has to say, and people don't yet know his stance and style," said University of Washington professor emeritus Oren Etzioni, founder of a nonprofit focused on fighting deepfakes. "A perfect opportunity to sow mischief with AI-generated misinformation."

After AFP presented YouTube with 26 channels posting predominantly AI-generated pope content, the platform terminated 16 of them for violating its policies against spam, deceptive practices and scams, and another for violating YouTube's terms of service. "We terminated several channels flagged to us by AFP for violating our Spam policies and Terms of Service," spokesperson Jack Malon said. The company also booted an additional six pages from its partner program allowing creators to monetize their content.

TikTok similarly removed 11 accounts that AFP pointed out -- with over 1.3 million combined followers -- citing the platform's policies against impersonation, harmful misinformation and misleading AI-generated content of public figures.

- 'Chaotic uses' -

With names such as "Pope Leo XIV Vision," the social media pages portrayed the pontiff supposedly offering a flurry of warnings and lessons he never preached. But disclaimers annotating their use of AI were often hard to find -- and sometimes nonexistent.
On YouTube, a label demarcating "altered or synthetic content" is required for material that makes someone appear to say something they did not. But such disclosures only show up toward the bottom of each video's click-to-open description. A YouTube spokesperson said the company has since applied a more prominent label to some videos on the channels flagged by AFP that were not found to have violated the platform's guidelines. TikTok also requires creators to label posts sharing realistic AI-generated content, though several pope-centric videos went unmarked. A TikTok spokesperson said the company proactively removes policy-violating content and uses verified badges to signal authentic accounts.

Brian Patrick Green, director of technology ethics at Santa Clara University, said the moderation difficulties are the result of rapid AI developments inspiring "chaotic uses of the technology."

Many clips on the YouTube channels AFP identified amassed tens of thousands of views before being deactivated. On TikTok, one Spanish-language video received 9.6 million views while claiming to show Leo preaching about the value of supportive women. Another, which carried an AI label but still fooled viewers, was watched some 32.9 million times. No video on the pope's official Instagram page has more than 6 million views.

Experts say even seemingly harmless fakes can be problematic, especially if used to farm engagement for accounts that might later sell their audiences or pivot to other misinformation. The AI-generated sermons not only "corrode the pope's moral authority" and "make whatever he actually says less believable," Green said, but could be harnessed "to build up trust around your channel before having the pope say something outrageous or politically expedient."

The pope himself has also warned about the risks of AI, while Vatican News called out a deepfake that purported to show Leo praising Burkina Faso leader Ibrahim Traore, who seized power in a 2022 coup.
AFP also debunked clips depicting the pope, who holds American and Peruvian citizenships, criticizing US Vice President JD Vance and Peru's President Dina Boluarte. "There's a real crisis here," Green said. "We're going to have to figure out some way to know whether things are real or fake."
