
Latest news with #genAI

Business Insider lays off 21 percent of staff to 'endure extreme traffic drops.'

The Verge

2 days ago

  • Business
  • The Verge


As reported by The Information and Axios reporter Sara Fischer, CEO Barbara Peng emailed staff on Thursday announcing that Business Insider is 'scaling back on categories that once performed well on other platforms' and mostly exiting its search-reliant Commerce business, an apparent acknowledgement of Google Zero despite Sundar Pichai's rebuttals. The company is shrinking, noting that '70 percent of our business has some degree of traffic sensitivity,' while going all-in on AI with a push to adopt Enterprise ChatGPT, gen-AI site search, an AI paywall, and other products.

Want an advanced AI assistant? Prepare for them to be all up in your business

Yahoo

3 days ago

  • Business
  • Yahoo


The growing proliferation of AI-powered chatbots has led to debates about their social roles as friend, companion or work assistant, and they are growing increasingly sophisticated. The role-playing platform Character AI promises personal and creative engagement through conversations with its bot characters. There have also been negative outcomes: Character AI is currently facing a court case involving its chatbot's role in a teen's suicide. Others, like ChatGPT and Google Gemini, promise improved work efficiency through genAI. But where is this going next?

Amid this frenzy, inventors are now developing advanced AI assistants that will be far more socially intuitive and capable of more complex tasks. The shock instigated by OpenAI's ChatGPT two years ago was not only due to the soaring rate of adoption and the threat to jobs, but also because of the cultural blow it aimed at creative writing and education. My research explores how the hype surrounding AI affects some people's ability to make professional judgments about it. That hype feeds on anxiety about the vulnerability of human civilization and on the idea of a future 'superintelligence' that might outpace human control.

With US$1.3 trillion in revenue projected for 2032, the financial forecast for genAI drives further hype. Mainstream media coverage also sensationalizes AI's creativity and frames the tech as a threat to human civilization. Scientists all over the world have signalled an urgency around the implementations and applications of AI. Geoffrey Hinton, Nobel Prize winner and AI pioneer, left his position at Google over disagreements about the development of AI and has expressed regret about his work there because of AI's progress.

The future threat, however, is much more personal. The turn in AI underway now is a shift toward self-centric and personalized AI tools that go well beyond current capabilities to recreate what has become a commodity: the self.
AI technologies reshape how we perceive ourselves: our personas, thoughts and feelings. The next wave of AI assistants, a form of AI agent, will not only know their users intimately but will be able to act on a user's behalf or even impersonate them. This idea is far more compelling than assistants that only write text, create video or code software. These personalized AI agents will be able to determine intentions and carry out work.

Iason Gabriel, senior research scientist at Google DeepMind, and a large team of researchers wrote about the ethical development of advanced AI assistants. Their research sounds the alarm that AI assistants can 'influence user beliefs and behaviour,' including through 'deception, coercion and exploitation.' There is still a techno-utopian aspect to AI: in a podcast, Gabriel ruminates that 'many of us would like to be plugged into a technology that can take care of a lot of life tasks on our behalf,' also calling it a 'thought partner.'

This more recent turn in AI disruption will interfere with how we understand ourselves, and as such, we need to anticipate its techno-cultural impact. Online, people express hyper-real and highly curated versions of themselves across platforms like X, Instagram or LinkedIn. And the way users interact with personal digital assistants like Apple's Siri or Amazon's Alexa has socialized us to reimagine our personal lives. These 'life narrative' practices play a key role in developing the next wave of advanced assistants.

In the quantified self movement, users track their lives through various apps, wearable technologies and social media platforms. New developments in AI assistants could leverage these same tools for biohacking and self-improvement, yet these emerging tools also raise concerns about processing personal data. AI tools involve the risk of identity theft, gender and racial discrimination, and various digital divides.
Human-AI assistant interaction can converge with other fields. Digital twin technologies for health apply user biodata: they create a virtual representation of a person's physiological state and can help predict future developments. This could also lead to over-reliance on AI assistants for medical information without oversight from medical professionals. Other advanced AI assistants will 'remember' people's pasts and infer intentions or make suggestions for future life goals. Serious harms have already been identified when remembering is automated, such as for victims of intimate partner violence.

We need to expand data protections and governance models to address potential privacy harms. This upcoming cultural disruption will require regulating AI. Let's prepare now for AI's next cultural turn.

This article is republished from The Conversation, a nonprofit, independent news organisation bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Isabel Pedersen, Ontario Tech University. Isabel Pedersen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).

To Create Value with AI, Improve the Quality of Your Unstructured Data

Harvard Business Review

3 days ago

  • Business
  • Harvard Business Review


A company's content lies largely in 'unstructured data': the emails, contracts, forms, SharePoint files, meeting recordings and so forth created through work processes. That proprietary content makes gen AI more distinctive, more knowledgeable about your products and services, less likely to hallucinate, and more likely to deliver economic value. As a chief data officer we interviewed pointed out, 'You're unlikely to get much return on your investment by simply installing CoPilot.'

Many companies have concluded that the most value from gen AI lies in combining the astounding language, reasoning, and general knowledge of large language models (LLMs) with their own proprietary content. That combination is necessary, for example, in enterprise-level gen AI applications in customer service, marketing, legal, and software development, and in product/service offerings for customer use. The most common approach by far to adding a company's own content is 'retrieval-augmented generation,' or RAG, which combines traditional information-retrieval tools like databases with generation by LLMs. It is used because submitting vast quantities of content in a prompt is often technically infeasible or expensive. While technically complex, the RAG approach is quite feasible and yields accurate responses to user prompts, provided the unstructured data used in RAG is of high quality.

Therein lies the problem. Unstructured data is frequently of poor quality: obsolete, duplicative, inaccurate, and poorly structured, among other problems. Most companies have not done well with the quality of structured data, even though that data is used every day to complete business transactions and understand performance. Unstructured data is tougher. The last serious attempts to address it date to the 1990s and 2000s, when knowledge management was popular. Most efforts proved unsuccessful.
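To make the RAG pattern concrete, here is a minimal sketch of the retrieve-then-augment loop. The retrieval step is deliberately naive keyword overlap (production systems use embeddings and a vector store), and the prompt would be sent to an LLM rather than used directly; all function names and documents are illustrative, not from the article.

```python
# Minimal RAG sketch: retrieve the most relevant proprietary documents,
# then build an augmented prompt for an LLM. Retrieval here is naive
# keyword overlap; real systems use embeddings and a vector store.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query, doc):
    """Count occurrences of query terms in the document."""
    q, d = set(tokenize(query)), Counter(tokenize(doc))
    return sum(d[t] for t in q)

def retrieve(query, docs, k=2):
    """Return the top-k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the user's question with retrieved proprietary content."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The warranty covers manufacturing defects for one year.",
    "Shipping is free on orders over $50.",
]
prompt = build_prompt("How many days do I have to return a purchase?", docs)
```

The point of the sketch is the division of labor the article describes: the company's own content supplies the facts, and the LLM (omitted here) supplies the language and reasoning, which is why the quality of the retrieved documents bounds the quality of the answer.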
Surveys confirm that most leaders are aware that poor quality hampers their generative AI efforts, and that they did not have a strong focus on unstructured data until the advent of gen AI. Of course, the best way to deal with data quality problems is to prevent them. Over the long term, companies serious about AI must develop programs to do just that. Those who create documents, for example, need to learn to evaluate them for quality and tag key elements. But this will take much concerted effort and is no help in the short term. To get value from gen AI, companies need to build RAG applications using high-quality unstructured data. Our objective in this article is to help them do so by summarizing the most important data problems and the best approaches, both human and technical, for dealing with them.

What Is Data Quality for Unstructured Data?

High-quality data, whether structured or unstructured, results only from focused effort, led by active, engaged leadership, some well-placed professionals, clear management responsibilities for all who touch data, and a relentless commitment to continuous improvement. Absent these things, chances are high your data is not up to snuff. As coach and advisor Alex Borek of the Data Masterclass told us, 'When AI doesn't work, it often reveals flaws in the human system.' Indeed, the best estimate is that 80% of the time spent on an AI project will be devoted to data. For example, a Philippines-based Morgan Stanley team spent several years curating research reports in advance of the AI @ Morgan Stanley assistant project. The curation started before gen AI became widespread, which allowed Morgan Stanley to get its application into production more quickly. To work effectively, RAG requires documents directly relevant to the problem at hand, a minimum of duplicated content, and information in those documents that is complete, accurate, and up to date.
Further, as Seth Earley of Earley Information Science noted, 'You must supply context, as much as possible, if a LLM is to properly interpret these documents.' Unstructured data does not come pre-loaded with the needed context, and gen AI is largely incapable of determining what is the best information to solve a particular business question or issue. It is also not good at 'entity resolution,' i.e., answering questions like: Is this 'John Smith' in document A, about customers, the same person as 'J. A. Smith' in document B, about vendors, and/or the same person as 'Mr. J Smith' in document C, about a donation to our foundation?

Most structured data is defined in a data model or dictionary. This provides some context and helps reduce the John Smith/J. A. Smith problem described above. For structured data it is easier to find the data desired, learn who is responsible for it, and understand what the data means. As John Duncan, head of data governance for the large car retailer CarMax, told us, unstructured data requires the same clarity about data ownership, producers, consumers, and stewards. It also benefits from standards for data quality thresholds, data lineage, access controls, and retention durations. This metadata is typically included in a data dictionary. With unstructured data, however, there is seldom a dictionary. Often there is no centralized management of such content; documents are stored haphazardly, using different naming conventions, on different computers or cloud providers across the company. There is often no common definition of a content type; an ad agency data leader confessed that there is no common definition of a 'pitch' across the agency. Finally, unstructured documents were often developed for a different purpose than feeding gen AI. A contract with a supplier, for example, was not designed to provide insight into the level of risk in a supplier relationship.
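The John Smith/J. A. Smith problem above can be illustrated with a toy entity-resolution rule. This is a deliberately coarse sketch, not a real matching system: production entity resolution combines many signals (addresses, emails, dates), while this one only strips honorifics and compares surname plus first initial, and the names are the article's own examples.

```python
# Toy entity-resolution sketch: decide whether two name mentions might
# refer to the same person. Real systems use many more signals; this
# one only normalizes honorifics and compares first initial + surname,
# so it will over-match (e.g., "Jane Smith" vs. "John Smith").
import re

def parse_name(raw):
    """Strip honorifics and punctuation; return (first_initial, surname)."""
    cleaned = re.sub(r"\b(mr|mrs|ms|dr)\.?\s+", "", raw.lower())
    parts = re.findall(r"[a-z]+", cleaned)
    if not parts:
        return ("", "")
    return (parts[0][0], parts[-1])

def maybe_same_person(a, b):
    """True if surnames match and first initials do not conflict."""
    (ia, sa), (ib, sb) = parse_name(a), parse_name(b)
    return sa == sb and (ia == ib or not ia or not ib)

pairs = [
    ("John Smith", "J. A. Smith"),  # same surname, same first initial
    ("Mr. J Smith", "John Smith"),  # honorific stripped, still compatible
    ("Jane Smith", "John Doe"),     # different surname
]
results = [maybe_same_person(a, b) for a, b in pairs]
```

Even this tiny rule shows why the problem is hard: it links the article's three "Smith" mentions, but it cannot tell whether they are truly the same person without the surrounding context (customer vs. vendor vs. donor) that a data dictionary would supply.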
We believe it was the late management thinker Charles Handy who observed, 'Information gathered for one purpose is seldom useful for another.'

An Unstructured Data Quality Process

Fortunately, several approaches and tools can help to improve unstructured data. We recommend that all AI projects follow a disciplined process, building quality in wherever they can. Such a process must embrace the following steps:

  1. Address unstructured data quality issues problem by problem, not all at once.
  2. Identify and assess the data to be used.
  3. Assemble the team to address the problem.
  4. Prepare the data, employing both humans (4a) and AI (4b) when possible.
  5. Develop your application and validate that it works.
  6. Support the application and try to inculcate quality in content creation processes.

1. Address unstructured data quality issues problem by problem, not all at once. There is too much unstructured data to improve all at once. Project leaders should ensure that all involved agree on the problem or opportunity to be addressed. Priorities should be based first on the value to the business of solving the problem, and second on the feasibility and cost of developing a solution, including data quality improvement. Areas of the business with data that is already of reasonably good quality should receive higher priority. That's the approach Nelson Frederick Bamundagea, IT director at the truck refrigeration servicing company W&B Services, has taken. His knowledge retrieval application for service technicians uses the schematics of some 20 refrigerator models provided by two manufacturers. These have been used over and over, and the vocabulary employed is relatively small, providing a high level of trust. More generally, Alex Borek advises companies to 'first look to highly curated data products whenever possible.'

2. Identify and assess the data to be used.
Since the data is critical to the success of an LLM-based knowledge project, it's important to assess the data at an early stage. There is a human tendency to include any possibly relevant document in a RAG application, but companies should adopt a healthy skepticism and a 'less is more' philosophy: absent a good reason to trust a document or content source, don't include it. Experts can't evaluate every document, but they can dig deeply into a small sample. Are the sample documents loaded with errors, internal inconsistencies, or confusing language, or are they relatively clean? Use your judgment: keep clean data and proceed with caution; toss bad data. If the data are in horrible shape or you can't find enough good data, reconsider the project.

3. Assemble the team to address the problem. Given the need for some human curation of unstructured data, it's unlikely that a small team of experts can accomplish the necessary work. In addition, those who work with the data day to day typically have a better idea of what constitutes high quality and how to achieve it. In many cases, then, it may be helpful to make data quality improvement a broadly participative project. For example, at Scotiabank, the contact center organization needed to curate documents for a customer chatbot. Center staff took responsibility for the quality of the customer support knowledge base and ensured that each document fed into the RAG-based chatbot was clear, unique, and up to date.

4a. Prepare the data with humans. If you've concluded, as you should, that there must be a human contribution to improving unstructured data quality, this is the time to engage it. That contribution could include having a stakeholder group agree on the key terms (e.g., 'contract,' 'proposal,' 'technical note,' and 'customer') and how they are defined. Document this work in a business glossary.
This can be hard. Consistent with 'Davenport's Law,' first stated more than 30 years ago, the more an organization knows or cares about a particular information element, the less likely it is to have a common term and meaning for it. This issue can be overcome through 'data arguing' (not data architecture) until the group arrives at a consensus. And, of course, if there is a human curation role, this is the time to begin it. That entails deciding which documents or content sources are the best for a particular issue, 'tagging' them with metadata, and scoring content on such attributes as recency, clarity, and relevance to the topic. Morgan Stanley has a team of 20 or so analysts based in the Philippines that scores each document along 20 different criteria.

4b. Prepare the data with AI. Gen AI itself is quite good at some tasks needed to prepare unstructured data for other gen AI applications. It can, for example, summarize content, classify documents by category of content, and tag key data elements. CarMax, for instance, uses generative AI to translate different car manufacturers' specific language for describing automotive components and capabilities into a standard set of descriptions that lets a consumer compare cars across manufacturers. Gen AI can also create good first drafts of 'knowledge graphs,' displays of what information is related to other information in a network. Knowledge graphs improve the ability of RAG to find the best content quickly. Gen AI is also good at de-duplication: finding exact or very similar copies of documents and eliminating all but one. Since RAG approaches pick documents based on specified criteria, these criteria (recency, authorship, etc.) can be re-weighted ('re-ranked') to give certain ones more influence in content search. We have found, however, that AI is not particularly good at identifying the best document in a set of similar ones, even when given a grading rubric.
For that task, and for reviewing, humans are still necessary. As a starting point, we recommend using humans to figure out what needs to be done, and machines to increase scale and decrease unit cost in execution.

5. Develop your application and validate that it works. Developing a RAG model from curated data involves several rather technical steps, best performed by qualified tech staff. Even after doing everything possible to prepare the data, it is essential that organizations rigorously test their RAG applications before putting them into production. This is particularly important for applications that are highly regulated or involve human well-being. One way to validate the model is to identify '50 golden questions': a team identifies questions that the RAG application must get right, determines whether it does so, and acts accordingly. The validation should be repeated over time, given that foundational LLMs change often. When a European insurer tried to validate its system for knowledge on how to address claims, it found that customers' contracts, call center personnel, the company's knowledge base, and the claims department often disagreed. This led the company to clarify that the claims department 'owned' the answer, i.e., served as the 'gold standard.' Changes to the chatbot, customer contracts, and call center training followed.

6. Support the application and try to inculcate ongoing quality. As a practical matter, no RAG application will enjoy universal acclaim the minute it is deployed. The application can still hallucinate, there will be bugs to work out, and there will be some level of customer dissatisfaction; some users discount a well-performing RAG application if it makes any errors whatsoever. Finally, changes will be needed as the application is used in new ways. So plan for ongoing quality management and improvement.
The plan should include:

  • Some amount of 'qualified human in the loop,' especially in more critical situations
  • A means to trap errors, conduct root cause analysis, and prevent them going forward
  • Efforts to understand who the customers of the RAG application are, how they use it, and how they define 'good'
  • Feedback to managers responsible for the business processes that create unstructured data, to improve future inputs. Content creators can be trained, for example, to create higher-quality documents, tag them as they create them, and add them to a central repository.

It appears that RAG, featuring proprietary content combined with LLMs, is going to be with us for the foreseeable future. It is one of the best ways to gain value from gen AI, if one can feed models high-quality unstructured data. We know there is a lot here, but it is certainly within reach of those who buckle down and do the work.
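The 'golden questions' validation described in step 5 can be sketched as a small regression harness. Here `ask_rag` is a stand-in (faked with canned answers for illustration) for the real deployed application, and the question/phrase pairs are hypothetical; the idea is simply a fixed test set rerun after every model or data change.

```python
# Sketch of a "golden questions" regression check for a RAG application.
# `ask_rag` is a stand-in for the real system; in practice it would call
# the deployed application. Each golden question pairs a query with a
# phrase the answer must contain.
def ask_rag(question):
    # Fake canned answers standing in for the deployed RAG application.
    canned = {
        "What is the claims deadline?": "Claims must be filed within 90 days.",
        "Who owns the claims answer?": "The Claims Department is the gold standard.",
    }
    return canned.get(question, "I don't know.")

GOLDEN = [
    ("What is the claims deadline?", "90 days"),
    ("Who owns the claims answer?", "Claims Department"),
]

def run_golden_set():
    """Return the questions whose answers lack the required phrase."""
    return [q for q, must_contain in GOLDEN
            if must_contain not in ask_rag(q)]

failures = run_golden_set()  # an empty list means the check passed
```

Because foundational LLMs change often, a harness like this is rerun on every model upgrade or content refresh, turning the article's '50 golden questions' practice into an automated gate before redeployment.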

AI Tool to Answer Patient Queries Has Mixed Performance

Medscape

21-05-2025

  • Health
  • Medscape


A novel generative artificial intelligence (genAI) tool for drafting physician responses to patients' online questions showed mixed results, according to a poster presented recently at the Society of General Internal Medicine (SGIM) 2025 Annual Meeting in Hollywood, Florida. Researchers led by Natalie Lee, MD, MPH, assistant professor of internal medicine at The Ohio State University College of Medicine in Columbus, Ohio, conducted a thematic analysis of 201 genAI responses to real online questions from patients. The AI generator, an Epic draft tool based on GPT-3.5, was trained with four to five rounds of revisions to build the model. The team grouped responses into five categories: building rapport, gathering information, delivering information accurately and professionally, facilitating next steps and patient decisions, and responding to emotion. Primary care provider reviewers developed a coding system to define strengths and limitations for each domain.

'73% of Drafts Had Limitations'

'We observed both strengths and limitations across all communication domains,' Lee told Medscape Medical News. The data show that 60% of drafts were deemed 'usable,' and 73% had limitations. In the category of information delivery, for instance, the model performed well in providing accurate information about what to expect with a medical test but gave information about blood donation when the patient's question was about kidney donation. The researchers reported that 65% of AI information delivery responses made incorrect assumptions about clinical context, 36% made incorrect assumptions about workflow, and 24% of the responses were 'simply wrong.' In responding to emotion, an example limitation was seen when a patient expressed long-standing emotional suffering related to a condition and the reply did not acknowledge the patient's emotion.
Overall, the researchers wrote, 'While GenAI drafts often contain usable portions, net value added may be limited by inadequacies in more cognitively demanding, critical domains like providing accurate information, asking key questions, and responding to patient emotions. Further optimization is needed to genuinely enhance patient care and provider experience.' This work provides more data about the usefulness of AI as the need for help fielding patients' questions has intensified with unprecedented message volume. 'How do we integrate generative AI technology so that we uphold communication and care quality while also reducing provider burden? It's not clear we have that figured out,' Lee said.

Framework Adds Novel Assessment

Owais Durrani, DO, an emergency medicine physician with Memorial Hermann Health System in Houston, told Medscape Medical News that the authors' use of a novel framework to assess the quality of medical communications made this study particularly significant. 'Too often, we judge AI tools on surface-level efficiency metrics — how many clicks saved, how many seconds shaved off. But this study looked at deeper clinical domains. The finding that AI performed better in low-risk areas like enabling next steps or basic rapport-building but struggled in areas like delivering accurate information or responding to emotion really highlights where the gaps are. It's a reminder that AI can support the clinical process but shouldn't replace human judgment, especially in patient interactions that require empathy, nuance, or clinical reasoning.' He said he's open to AI communications, but with caution. 'Many of the drafts are overly verbose, vague, or miss the clinical nuance required to answer a patient's real question,' he said. 'Physicians then have to sift through the noise and rework the content, which in many cases takes as much — or more — time than writing it ourselves. For now, I view AI-generated messages more as a starting point than a solution.'
Lee and Durrani reported having no relevant financial relationships.

A Letter to the Class of 2025

Time Magazine

11-05-2025

  • Business
  • Time Magazine


By and Michelle Peng

If you're graduating this year, you've already lived through more upheaval than most generations experience in a decade. Your formative educational experiences were marked by a global pandemic, remote learning, rising inflation, political unrest, and a job market that seems to shift monthly. And now, you're entering a workforce being rapidly reshaped by AI, hybrid work, and new expectations about what a career even means. Everything that you've endured has already exercised one of the most critical skills you'll need in your career and life: resilience. The ability to adapt, stay grounded through uncertainty, and keep going when the path isn't clear isn't just a personal strength. It's a professional superpower. In a world where change is the only constant, resilience will be one of the most valuable assets you can bring into your career, and it will also carry you forward. Whether you've already landed the job you hoped for, you're still searching, or you're trying to make sense of the one you've got, that strength will serve as your foundation. Wherever you're starting from, here's some wisdom to help you navigate the early steps of your career with clarity, confidence, and perspective.

You've already landed your dream job.

First of all, congratulations! We don't have to tell you that this is a tough job market, with hiring for new grads down 16% compared to last year and 44% below 2022 levels, according to new data from Gusto. To hit the ground running in your first job, here are some things to keep in mind: Keep using AI in a way that helps you grow, think, and lead. We know that most of you are already using tools like ChatGPT regularly, and 77% of new grads expect to use genAI in the workplace.
While Charter has covered the ways that improper AI use can lead to lower-quality work, make work less satisfying, and even annoy your peers, we're confident that using AI in combination with your critical thinking skills can make work more productive and rewarding. As our colleague Jacob Clemente recently wrote, mastering how to direct AI agents may also help you practice key leadership skills and behaviors for managing human teams, including 'fluid intelligence—i.e., the ability to solve novel problems—emotional perceptiveness, and economic decision-making, which refers to how good someone is at allocating resources,' as well as 'asking more questions and having more back-and-forth exchanges.' Because genAI makes it easier than ever to generate reports, ideas, and even prototypes, your discernment can set you apart from your peers. 'Sharpen your opinions and taste, because it matters even more when AI can do the drudgery and do the mundane work,' says Aparna Chennapragada, chief product officer of experiences and devices management at Microsoft. 'What makes a difference is the why… And that's where your human judgment comes in.' That means staying curious and constantly learning, whether that's asking questions about how things work, trying out a new AI use case, or setting aside time to learn about what your peers, superiors, and superiors' peers are working on.

We've also seen again and again the value of building trust and deepening relationships for workers at any level. One of the best ways to get ahead is to lift others up, an idea that's backed up by organizational psychology research on status at work. Take opportunities to express your gratitude for teammates' contributions and highlight other people's wins, a practice Carnegie Mellon researcher Rosalind Chow calls 'good gossip.' Research indicates that good gossip and 'dual promotion,' a process where you highlight your own and others' achievements at the same time, can benefit you, too.
'As you make other people more visible, you also become more visible,' explains Chow.

You have… a job.

Landing a job—not the job—can feel like a relief and a letdown simultaneously. Maybe it's not your dream role. Maybe it's not even close. And while everyone on LinkedIn is 'thrilled to announce' something shiny and seemingly perfect, you're trying to reconcile this first step and make it feel like it makes sense. A job is not a failure. It's a starting point, and it's more normal than you think. According to the Bureau of Labor Statistics, the average person holds over 12 different jobs by the time they're 55. In fact, many of the most successful people you admire didn't start with roles that lit them up. What matters more than the title or company is how you use the experience. What are you learning? What skills are you building? What relationships will you nurture? What parts of the job give you energy? What parts take it away?

This is where the idea of a portfolio career comes in. The old model (pick a lane, stay in it) is fading to give way to something more dynamic. Instead, you can expect, and should be intentional about, your career being a collection of roles, experiences, side projects, and pivots that build on each other over time. Portfolio careers are diverse in nature: you don't necessarily have to be both a doctor and a lawyer, nor do you need to get paid for each aspect of your portfolio, but the yield is diversification in your identity, your skillset, and your ability to pivot and navigate. The throughline isn't the job titles. It's you. Your values, your curiosity, your adaptability, and how you treat people will shape what's ahead of you. The story that you're telling with your work is one that only you can write, and that means that you can change direction and evolve. The key is knowing how to name the shifts: why you moved, what you gained, and what you're aiming toward.
When you can articulate your own narrative, it not only gives you clarity, it also helps others understand your journey, advocate for you, and open doors to new opportunities.

You're still looking.

Don't stress: landing that offer letter often takes longer than you'd expect. One survey of 2025 graduates found that 82% expect to start working within three months of graduating, while 77% of recent graduates say they started work that quickly. If you're looking to game out a job search strategy based on labor market trends, recent reporting reveals some sure bets: manufacturing is expected to add millions of jobs in the coming years; healthcare and education employment typically remains stable during recessions (and is relatively insulated from the effects of AI automation); and some of the fastest-growing job titles are 'middle-skill' roles that come with a six-figure salary and do not require a college degree. But with portfolio careers becoming increasingly common, skills can come from unexpected places, so we encourage you to follow your curiosities rather than trying to divine future labor market fluctuations. If you have the bandwidth and the financial flexibility to do so, take on some projects that flex new skills on a volunteer or freelance basis. (That's how I (Michelle) first started working with Charter—as one of many paid and unpaid projects doing research, growth marketing, and administrative support for organizations as diverse as a restorative justice nonprofit, a hospitality workers' union, a design firm, and a college alumni group.) Living in Los Angeles, I've also seen countless instances of college-educated peers gaining invaluable skills in nontraditional jobs. Friends have taken jobs bartending, delivering bagels, and serving wedding banquets to make ends meet due to a historic contraction of the entertainment industry and the gig-based nature of creative work.
Through those experiences, they've practiced leading teams, delegating responsibilities amid competing deadlines, and building relationships in high-stress environments, preparing them both for any future big job and for the possibility of that big job ending. When in doubt, there's never a shortage of work that requires collaborating with others to make your community better. Many of my friends who have left the entertainment industry are pivoting into work that feels fundamentally human and requires the kind of connections that are unlikely to be entirely displaced by technology, including teaching, therapy, and community organizing. That's what I did in summer 2020, jobless and fresh off a post-grad fellowship: I found my way to Arizona, where I joined a political campaign and spent eight hours a day talking to strangers about the upcoming election. Spending eight hours a day under the desert sun, decked out in a KN95, a plastic face shield, and a fluorescent safety vest, was not how I expected to spend my second year out of college. But the time I spent connecting with voters on issues as diverse as healthcare, education, and immigration gave me a crash course in how to tell a compelling story and build the trust necessary to hear others', a skill I'll use every day in my current role and in all the years to come.
