Pittsburgh housing authority recruits AI to help with application backlogs


Technical.ly · 27-06-2025
Pittsburgh's housing authority will begin using artificial intelligence to help its understaffed voucher department process thousands of applications from people in need of affordable housing.
On Thursday, the Housing Authority of the City of Pittsburgh (HACP) board approved payment of $160,392 to Boodskapper Inc., a private artificial intelligence company known as Bob.ai. The company offers several services to automate housing authority work, which include reviewing and verifying certification packages from current voucher holders. The effort will begin as a one-year pilot program.
Addressing concerns from the board members and a public speaker about the role the AI would play, HACP Executive Director Caster Binion stressed, 'The AI will not be in charge, not making decisions.'
'This will be used for recertifications specifically,' he added.
Recertifications refer to updates affecting existing voucher holders, such as household income level and number of family members. Households with vouchers pay 30% of their income toward rent and utilities, with the balance covered by the HACP.
The authority's Housing Choice Voucher Program Director LaShawna Hammond told the board the 'AI would scan recertification packets with their income and then it shows us on the back end if the packet is completed. So it's not processing, it's just doing preliminary work.'
Approval of the pilot comes after Binion signaled in a meeting last year with City Council that the authority would experiment with AI and other efforts to improve the efficiency of the voucher program. In 2020, PublicSource reported that the housing authority planned to 'enhance the voucher program,' but those efforts were hampered by staffing shortages, according to the authority's leadership.
Since then, steps have been taken to train new staff, according to Binion, who along with the board's former chair pledged improvements in 2022.
Board member Charlise Smith worried that the program might take on human tendencies to discriminate against certain types of people.
A public commenter at the board meeting echoed Smith's concerns.
'We all agree that increased efficiency is needed within the HCV Program to address administrative delays that limits the use of HCVs,' said Megan Hammond, a housing justice advocate and executive director of the Fair Housing Partnership of Greater Pittsburgh.
'AI is dependent upon the humans who create the AI. As a result, AI is susceptible to the continuation and scaling of fair housing concerns,' Hammond said, noting that the National Fair Housing Alliance created a Responsible AI Symposium that includes a framework for auditing algorithmic bias in these programs.
Hammond called for 'greater transparency about the role of the proposed Bob.ai, including checks and balances for applicants and voucher recipients to dispute inaccurate information.'
She pressed for monthly public reports from the authority on the status of the Housing Choice Voucher waiting list, along with information on where the authority allocates funds.
Concerns over the use of AI in public housing have also reached the federal level.
The US Department of Housing and Urban Development released guidelines for housing authorities across the country on best practices and uses of this new technology in an attempt to enforce the Fair Housing Act and prevent discrimination in housing.
LaShawna Hammond maintained the program is needed to help with the backlog of recertification applications that threatens to overwhelm her staff.
She noted that the system will be used to improve processing time and reduce clerical and user errors. The system will be able to detect missing information in an application, automatically triggering a response that lets the client know their application is incomplete.
She said her department has 13 housing specialists, each averaging a caseload of 500. She said they hired five new housing specialists to start June 30 but it would take some time to train the new hires and bring them up to speed.

Related Articles

UBalt is launching an AI center that focuses on access, not just innovation

Technical.ly · 04-08-2025

Baltimoreans now have a new place to learn about and explore AI, thanks to the University of Baltimore. Announced July 28, the Center for AI Learning and Community-Engaged Innovation (CAILI) will serve as a space where AI technology is not only studied, but actively applied to address local needs. CAILI, led by Jessica Stansbury, UBalt's director of teaching and learning excellence and the center's inaugural director, wants to ensure AI doesn't widen the existing digital divide.

'It's not that CAILI is just a place for AI innovation,' Stansbury told Technical.ly. 'It's more that CAILI is a place of AI innovation with and for Baltimore.'

Stansbury and her colleagues believe CAILI will set itself apart from other university-run centers (like the two-year-old Johns Hopkins Data Science and AI Institute, for example) by prioritizing public understanding and practical engagement over research.

'We're really looking to make sure that people are aware of what AI is, that folks are AI literate, particularly with respect to working professionals and people who are going to need to manage how others are using AI in the workplace,' said Aaron Wachhaus, associate provost at UBalt.

Created with collaboration, with goals to boost community impact

The center, which will be supported by the provost's office for the first three years, was born out of two years of AI-focused initiatives and research within the university. Initial efforts included 'Learn with Me' sessions for faculty and students exploring tools like ChatGPT. By mid-2023, UBalt had joined a national research initiative led by education research nonprofit Ithaka S+R, collaborating with other institutions to explore AI in higher education. The university hosted its first AI Summit in June 2024, which brought together students, faculty, local organizations and industry leaders to collectively define 'AI literacy.' The summit became a catalyst for deeper community involvement and helped shape the vision for CAILI.

'We can't be blindly developing curriculum if we don't understand how it's impacting the community and industry,' Stansbury said. 'So we brought everyone to the table.'

Those conversations led to the creation of a free AI in Practice webinar series, featuring industry experts discussing the use of AI in higher education. UBalt partnered with JHU and the University of Maryland, Baltimore County on the series.

'I really firmly believe, not only with my faculty background, but my administrative background, that this is a space where we cannot afford to compete,' Stansbury said, underscoring her commitment to cross-institutional partnership.

Helping Baltimoreans understand and navigate AI

While CAILI doesn't yet have a physical space, it will operate through open-access venues like the university library and travel to partner organizations for lunch-and-learn workshops. Stansbury is still recruiting partners for the center, but hopes to develop AI literacy workshops at libraries in the future.

Stansbury also wants to use the center to address the ethical concerns surrounding AI use. UBalt has already experimented with new educational tools like MoodleBox, a platform that allows students to interact with multiple AI engines. Last fall, students in an AI ethics and philosophy course used the tool to explore generative AI models and learn about biases within the technology.

Dean Merritt, a UBalt alum and vice president of sales at Baltimore-based SaaS company Mindgrub, has participated in AI in business panels hosted by the university. He sees UBalt as uniquely positioned to help Baltimoreans understand and navigate AI, given its strong ties to the local community and its focus on serving working adults and nontraditional students.

'The university as a whole has always been very accessible to all levels of education, all generations, those that are working and trying to learn and level up their careers as well as those who are going into it full-time,' Merritt said. 'It's a great place to focus on the real-world application of AI.'

Maria Eberhart is a 2025-2026 corps member for Report for America, an initiative of The Groundtruth Project that pairs emerging journalists with local newsrooms. This position is supported in part by the Robert W. Deutsch Foundation and the Abell Foundation.

As Trump admin promotes AI plan, experts consider next steps for regulation

Technical.ly · 28-07-2025

As the debate over artificial intelligence regulation intensifies, the divide over how and whether to rein in the technology is becoming increasingly stark. That dilemma was on display yesterday at Broadband Breakfast's latest weekly live virtual session, where a panel of experts debated competing visions for the future of AI.

The panel aired the same day the Trump administration announced its AI action plan, a series of policy recommendations that pushed scaling back regulations and eliminating what a statement called 'ideological bias' in AI. The plan outlines priorities including expanding data center infrastructure and promoting American AI technology in both government and private sectors.

The online discussion was moderated by Drew Clark, CEO of Breakfast Media and a longtime advocate for broadband expansion. As part of the organization's work to cover tech developments and broadband issues, Clark regularly hosts the weekly panels, which cover a wide range of topics related to internet policy. The most recent discussion touched on issues such as federal versus state AI regulations and the potential impact of AI on jobs. Here are a few key takeaways from the discussion.

Best regulator remains unclear

Since the president's reconciliation bill dropped a proposed AI moratorium, which would have barred states from regulating AI for 10 years, experts differ on how best to approach AI regulation. Sarah Oh Lam, senior fellow at the Technology Policy Institute, emphasized the need to strike a balance between protecting state interests and fostering AI innovation. She noted that while many existing state laws are narrow and sector-specific, targeting areas like employment or judicial proceedings, recent legislation in states like Colorado and California takes a much broader approach.

'I think it's more art than science … coming to the right balance of being able to set a floor to protect innovation and advancement of frontier models, but also letting states have some local jurisdiction,' Lam said.

Chris Chambers Goodman, a professor at Pepperdine Caruso School of Law in Los Angeles, also acknowledged the downsides of a patchwork of state laws, warning that inconsistencies in definitions and regulatory scope could lead to compliance challenges and legal uncertainty. At the same time, she argued that states should serve as laboratories for experimenting with new regulations. The scholar, whose work focuses on equal protection issues including algorithmic bias, is concerned about the shift from the Biden administration's focus on safety and privacy to the Trump administration's push for rapid AI acceleration.

'We're supposed to let the states try things out, come up with rules and regulations, and then after studying if these have been effective, Congress could decide on legislation that was actually based on best practices,' Goodman said.

How Trump's plan influences competition with China

The experts also debated the role of China in shaping US AI policy, offering sharply contrasting views on whether the United States should treat artificial intelligence as a geopolitical race. While some panelists called for urgency and limited regulation to stay ahead, others warned that framing AI development as a race with China could lead to risky, short-sighted decisions.

Adam Thierer, senior fellow at the free market-focused R Street Institute, argued that the US is locked in a 'stiff competition with China,' where leadership in AI has national security and ideological implications. He supported the Trump administration's new plan as a step toward fostering innovation and asserting American leadership in emerging technologies.

'It's not just about money and commerce,' Thierer said. 'It's also about values.'

Professor Yonathan Arbel of the University of Alabama School of Law took a more cautious view. While he agreed that the US should remain competitive, he pushed back against the 'race' narrative, questioning what winning actually means in a fast-evolving field where breakthroughs are quickly matched. Arbel warned that racing ahead without proper safeguards could create harms that are difficult to undo, especially as AI systems become less transparent.

'I don't love the race metaphor, and I think it leads us down a very dark road where we have to win no matter what the price is,' Arbel said.

The people's role in responsible AI use

Goodman warned that AI use in government services can impact due process. She cited Covid-era welfare benefits systems that used algorithms to flag people as fraudulent based on frequent address changes, which were common due to the instability of the pandemic.

'The government owes its citizens and residents the right to due process,' she said. 'And when decisions are made by AI technologies that are infringing on those rights, then we really do have a big issue.'

Lam also pointed out how humans still play a role in deploying and interpreting AI tools and how that can affect liability moving forward.

'Officials have choices between different models,' Lam said. 'So one pushback is: How is AI different from just software liability?'

Maria Eberhart is a 2025-2026 corps member for Report for America, an initiative of The Groundtruth Project that pairs emerging journalists with local newsrooms. This position is supported in part by the Robert W. Deutsch Foundation and the Abell Foundation. Learn more about supporting our free and independent journalism.
