AI Boom Seen Driving Next Decade of Emerging Markets Performance


Bloomberg, a day ago
Emerging-market funds are pivoting to capture the artificial intelligence craze, with some investors predicting that booming technology spending will drive returns for years to come.
Encouraged by the success of Chinese AI developer DeepSeek and Asia's powerhouse semiconductor firms, asset managers like AllSpring Global Investments and GIB Asset Management are concentrating more of their portfolios in AI stocks. That's been a winning trade: the six biggest contributors to the rally in Bloomberg's EM stocks index this year are all AI companies.

Related Articles

Column: For job seekers, AI can be a helpful tool and a barrier, south suburban career counselors say

Chicago Tribune

29 minutes ago



For job seekers, the use of artificial intelligence has become a double-edged sword. AI can be a barrier for many job seekers: it has led to more competition for jobs, made it more difficult to meet face-to-face with hiring decision-makers and has been found to exhibit racial bias. But it can be a helpful tool when job seekers use it wisely, say career and job search counselors.

'Employers are using it to screen resumes and, in some cases, conduct initial interviews with people,' said Andrew Challenger, senior vice president and labor expert at Chicago-based Challenger, Gray & Christmas, which provides outplacement and career services. 'An employer with AI can have 1,000 people do a 15-minute interview to get to one candidate, as opposed to before AI, when you could only do 25 to 30,' he said. 'That means there's more competition from other people around the country.'

Meanwhile, a University of Washington study released last year found that AI-based resume screening tools often favor white and male candidates over Black candidates and women. The study used a collection of more than 500 resumes and more than 500 job descriptions across nine occupations, running more than three million comparisons between resumes.

'AI is trained and built on what humans do, and we know that there's a real amount of prejudice and bias that human beings have,' said Challenger. 'That has been in many ways pushed into AI as well.' But that can be fixed by training the bias out of AI algorithms, he said.

AI poses challenges for job seekers who lack strong digital skills, said Alejandra Sinecio, chief program officer at National Able Network. The nonprofit provides career coaching and job search training at the American Job Center at Chicago Heights-based Prairie State College.
'A lot of our job centers are serving some of the most disenfranchised individuals who have very basic digital literacy skills,' and for them AI is an added complication, she said.

Many job seekers don't know or understand how AI is being used, said Awilda Gonzalez, the director of Chicagoland Workforce Innovation and Opportunity Act programs for National Able Network.

'We counsel them as job seekers, so they understand this is something that's out there, and it's better to utilize it,' said Greg Hirn, career coach for National Able Network at the American Job Center at Prairie State College. 'It's something we should be adapting to and learning from as opposed to just ignoring and hoping it's going away.'

Job seekers can use AI to help compile their resumes, saving time, said Challenger. They can use it to put together very specific and unique cover letters for individual employers and get much more customized and personal, he said.

'You can upload the job description and it helps pull out keywords and buzzwords to highlight,' said Cydney Boyd, career counselor at University Park-based Governors State University. 'It can give strategies on how to stand out in your resume and cover letter. It can give you job search strategies and help you research career fairs or industry information and data. It can also automate job searching tasks.'

AI algorithms can identify relevant job openings that job seekers might miss through traditional job search methods, Boyd noted. It can provide feedback on one's interview performance, help with career exploration, spotlight skill gaps and recommend personalized learning paths to help one acquire in-demand skills, she said.

But there are pitfalls job seekers should avoid. 'We often see resumes being sent to companies and recruiters that are fully written with AI, and a discerning eye can really tell,' said Challenger. 'It's very clear to me when I'm looking at a resume written by AI, and that's not appealing. I talk to recruiters that say they get tons of resumes and cover letters that are so obviously written by AI, and they just toss them out immediately.'

He said employers can also run resumes through AI and ask if they were written by AI. 'Really smart job seekers use AI to say, "Ask me questions necessary to build a great resume. Give me some suggestions,"' said Challenger. 'But you have to do it in your own language.'

AI should be used as 'a tool and not a crutch,' Hirn advised. 'Resumes and cover letters generated by AI should be considered a rough draft, a great start-off point,' and personalized from there, he said.

Job seekers should also be aware that AI-generated resumes and cover letters can contain errors. AI is trained to draft resumes and cover letters to be as optimal as possible and include quantifiable information, explained Boyd. 'But sometimes it throws in fake information,' she said. So proofreading remains important.

To help mitigate potential bias from AI screening technology, Boyd said she has recommended that students concerned they have names that easily identify their race or ethnicity use an initial in place of that name on their resumes and cover letters.

Job seekers also should be mindful of privacy issues when using AI, said Boyd and Hirn. A good rule of thumb: don't put in information you're not comfortable with the world learning, said Boyd.

But job seekers should recognize that AI in the hiring environment is here to stay. The adoption of generative AI, which creates new content based on patterns learned from large datasets, is rapidly increasing in human resources settings. The share of human resources leaders who are actively planning or already deploying generative AI has spiked from 19% in June 2023 to 61% in January 2025, according to Gartner, a research and advisory company.

Still, longstanding, tried-and-true job search strategies remain important and should be a priority, career and job search experts advise. They note many jobs continue to be filled by referrals.

'You can use AI to decrease time that you are doing writing to every single person and sending emails,' said Challenger. 'Use it for that so you can use time to try to meet people.'

'It really comes down to the human touch, the human element,' said Hirn. 'It's not just about what we do with AI. We still very much have to be boots on the ground, networking and making those personal connections with people.'

Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

TechCrunch

29 minutes ago



Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for 'potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,' according to a press release issued Monday.

'In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,' Paxton is quoted as saying. 'By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.'

The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

The Texas AG's office has accused Meta and Character.AI of creating AI personas that present as 'professional therapeutic tools, despite lacking proper medical credentials or oversight.' Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup's young users. Meanwhile, Meta doesn't offer therapy bots for kids, but there's nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.

'We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,' Meta spokesperson Ryan Daniels told TechCrunch. 'These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.' However, many children may not understand — or may simply ignore — such disclaimers. TechCrunch has asked Meta what additional safeguards it takes to protect minors using its chatbots.

In his statement, Paxton also observed that though AI chatbots assert confidentiality, their 'terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.'

According to Meta's privacy policy, the company does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to 'improve AIs and related technology.' The policy doesn't explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for 'more personalized outputs.' Given Meta's ad-based business model, this effectively translates to targeted advertising.

Character.AI's privacy policy also highlights how the startup logs identifiers, demographics, location information, and more information about the user, including browsing behavior and app usage across platforms. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram and Discord, which it may link to a user's account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.

TechCrunch has asked Meta and Character.AI whether such tracking is done on children, too, and will update this story if we hear back.

Both Meta and Character.AI say their services aren't designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character.AI's kid-friendly characters are clearly designed to attract younger users. The startup's CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform's chatbots.

That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like the Kids Online Safety Act (KOSA) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after a major push from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill's broad mandates would undercut its business model. KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Paxton has issued civil investigative demands — legal orders that require a company to produce documents, data, or testimony during a government probe — to the companies to determine if they have violated Texas consumer protection laws.

