21-year-old St Andrews Uni student charging firms thousands for her AI skills

The Courier · 2 days ago

A St Andrews University undergraduate has set up a successful AI consultancy business in her spare time.
Ideja Bajra is currently studying for a degree in cell biology at the prestigious Fife university.
However, fascinated by the world of artificial intelligence, she decided to teach herself the basics.
Fast forward a few months and the 21-year-old is now an AI expert working with top firms including the Royal Bank of Scotland.
London-born Ideja, who is due to graduate later this month, officially launched Edvance AI in April last year.
In her first year of business, the undergraduate worked with 10 different clients, providing one-off workshops and consulting retainers.
She said: 'When I was introduced to AI, I became absolutely fixated on it. I took to YouTube to upskill myself.
'It took at least nine months of intensive scavenging the internet for everything I could find.
'I came from a non-technical background and didn't attend any courses. I just watched videos and read as much as I could.
'Then I decided to start a consulting firm that focuses on the application of AI in the workplace.
'I have worked with the Royal Bank of Scotland, PGIM Asset Management, and Alpha Real Capital. Some others I can't mention because they are under NDAs.'
Now that Ideja's university exams are coming to an end, she plans to continue growing her business. She is currently finalising deals with five more companies.
Ideja offers different pricing models depending on a company's size and level of AI literacy.
Her services include AI literacy assessments and implementation strategies to help organisations use the technology effectively.
She continued: 'The workshops are tailored to the client's level of AI literacy. We upskill the team and help them understand the fundamentals.
'We also offer consulting, where we audit an entire company or a specific division and implement AI solutions, over a few months.
'Everything is tailored to the client, but our base price for consulting starts at £25,000 for a multi-month retainer and one-off workshops are £2,500.'
When it comes to the future of AI, Ideja prefers to focus on the positive potential the controversial technology holds.
She added: 'I like to see AI as a tool to automate day-to-day tasks that are mundane and boring.
'This way, it frees up time for us to be more creative and pursue what we're truly passionate about.
'At the moment, it's just me and seven freelancers who help on projects and get paid a percentage of the profit.
'In the future, I want Edvance AI to have a full-time team and support as many people as possible.'

Related Articles

'I'm the world's youngest self-made female billionaire'

Telegraph · an hour ago

A 30-year-old US tech entrepreneur born to immigrant parents has unseated Taylor Swift as the world's youngest self-made female billionaire. Lucy Guo, who is worth an estimated $1.3bn (£1bn) according to Forbes, told The Telegraph that her new title 'doesn't really feel like much'. 'I think that maybe reality hasn't hit yet, right? Because most of my money is still on paper,' she said.

Ms Guo's wealth stems from her 5pc stake in Scale AI, a company she co-founded in 2016. The artificial intelligence (AI) business is currently raising money in a deal likely to value it at $25bn. That valuation – and the billionaire status it has bestowed upon Ms Guo – underlines the current AI boom, which has reinvigorated Silicon Valley and is now reshaping the world. Everyone from Mark Zuckerberg to Sir Keir Starmer has praised the potential of the technology, which is forecast to save billions but may also destroy scores of jobs. The AI craze has caused the founders and chief executives of companies in the space to climb the world's rich list as they cash in on soaring valuations and increasing demand for their companies' technologies.

Ms Guo is also an exemplar of the American dream. Born to Chinese immigrant parents, she dropped out of Carnegie Mellon University to find her fortune. Like Mr Zuckerberg before her, the decision to ditch traditional education in favour of entrepreneurship has now paid off handsomely. Still, it was not a decision her parents approved of at the time. 'They stopped talking to me for a while – which is fine,' she said. 'I get it, because, you know, the immigrant mentality was like, "we sacrificed everything, we came to a new country, left all our relatives behind, to try to give our kids a better future". I think they viewed it as a sign of disrespect. They're like, "wow, you don't appreciate all the sacrifices we did for you, and you don't love us". So they were extremely hurt.' They have since reconciled.

In her first year of college, Ms Guo took part in hackathons and coding competitions, helping her to realise that 'you can just create a startup out of like, nothing'. She was awarded a Thiel Fellowship, which provides recipients with $200,000 over two years to support them in dropping out of university to pursue other work, such as launching a startup. The fellowship is funded by Peter Thiel, the former PayPal chief executive. Mr Thiel, who donated $1.25m to Donald Trump's 2016 presidential campaign, has been an enthusiastic supporter of entrepreneurship, and also co-founded Palantir, the data analytics and AI software firm now worth billions.

Ms Guo initially tried to found a company based around people selling their home cooking to others. While the business did well financially, it faced food safety problems and ultimately failed. After stints at Quora, the question-and-answer website, and Snapchat, Ms Guo launched Scale AI with co-founder Alexandr Wang in 2016. The company labels the data used to develop applications for AI. The timing was perfect: OpenAI had been founded a year earlier and uses Scale AI's technology to help train ChatGPT, the generative AI chatbot. OpenAI is one of the leading lights of the new AI boom and has a valuation of $300bn. Like Ms Guo, its founder and boss Sam Altman is now a billionaire.

Ms Guo left Scale AI only two years after helping to found it – 'ultimately there was a lot of friction between me and my co-founder' – but retained her stake, a decision that helped propel her into the ranks of the world's top 1pc.
'It's not like I'm flying PJs [private jets] everywhere. Just occasionally, just when other people pay for them. I'm kidding – sometimes I pay for them,' Ms Guo said, laughing.

After leaving Scale AI, Ms Guo went on to set up her own venture capital fund, Backend Capital, which has so far invested in more than 100 startups. She has also run HF0, an AI business accelerator. Ms Guo is particularly passionate about supporting female entrepreneurs: 'If you take two people that are exactly the same, male and female, they come out of MIT as engineers, I think that subconsciously every investor thinks the male is going to do better, which sucks.'

However, she is demanding of companies she backs. 'If you care about work-life balance, go work at Google, you'll get paid a high salary and you'll have that work-life balance,' she said. 'If you're someone that wants to build a startup, I think it's pretty unrealistic to build a venture-funded startup with work-life balance.'

'Number one party girl'

Ms Guo's work-life balance has itself been the subject of tabloid attention. After leaving Scale AI she was dubbed 'Miami's number one party girl' by the New York Post for raucous celebrations held at her multimillion-dollar flat in the city's One Thousand Museum tower, which counts David Beckham among its residents. One 2022 party involved a lemur and snake rented from the Zoological Wildlife Foundation, and led to the building's homeowners' association sending a warning letter. While she still owns her residence in Miami, Ms Guo lives in Los Angeles.

Alongside investing, Ms Guo has started a new business, Passes, which lets users sell access to themselves online through paid direct messages, livestreaming and subscriptions. Creators on the platform include TikTok influencer Emma Norton, actor Bella Thorne and the music producer Kygo. It is pitched as a competitor to Patreon, a platform that lets musicians and artists sell products and services directly to fans. However, the business also occupies the same space as OnlyFans, the platform known for hosting adult videos and images, and Passes has faced claims that it knowingly distributed sexually explicit material featuring minors.

A legal complaint filed by OnlyFans model Alice Rosenblum claimed the platform produced, possessed and sold sexually explicit content featuring her when she was underage. The claims are strongly denied by the company. A spokesman for Passes said: 'This lawsuit is part of an orchestrated attempt to defame Passes and Ms Guo, and these claims have no basis in reality. As explained in the motion to dismiss filed on April 28, Ms Guo and Passes categorically reject the baseless allegations made against them in the lawsuit.'

Scrutiny of Passes and Ms Guo herself is only likely to intensify following her crowning by Forbes. However, she is sceptical that she will hold on to the title of youngest self-made female billionaire for long. 'I have almost no doubt this title can be taken in three to six months,' she said, adding: 'Every single time it was taken, it's like, OK, there's more innovation happening – women are crushing it. I think I'm personally excited for someone else to take that title, because that's a sign entrepreneurship is growing.'

Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'

Daily Mail · 8 hours ago

Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court.

The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes. The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. The second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long, accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations' – outputs that are misleading or lack any factual basis. Vectara, an AI agent and assistance platform, has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time, with others dramatically higher. However, those figures become far higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time if asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling. In it they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court. The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Dame Victoria wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 out of 45 citations included in the legal arguments referenced past cases that simply did not exist.
Even in instances in which the cases did exist, the quotations were often inaccurate or did not support the legal argument being presented.

The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation by his local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist – tipped off by the US spellings and formulaic prose style.

Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector works out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound working environment.

Earlier this year a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a non-existent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd US Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The case had been conjured up by OpenAI's ChatGPT, and the appeal was dismissed. The court had ordered Lee to submit a copy of the cited decision after it was not able to find the case; she responded that she was 'unable to furnish a copy of the decision'. Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel', a three-judge panel for the Manhattan-based appeals court wrote.

In June 2023, two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against the airline Avianca. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca had acted in bad faith by using the AI bot's submissions – some of which contained 'gibberish' – even after judicial orders questioned their authenticity.

DOGE used flawed AI tool to 'munch' Veterans Affairs contracts, report claims

The Independent · 12 hours ago

Employees in the Department of Government Efficiency reportedly used a flawed artificial intelligence model to determine the necessity of contracts in the Department of Veterans Affairs, resulting in hundreds of contracts, valued at millions of dollars, being canceled.

Given only 30 days to implement President Donald Trump's executive order directing DOGE to review government contracts and grants to ensure they align with the president's policies, an engineer in DOGE rushed to create an AI tool to assist in the task. Engineer Sahil Lavingia wrote code which told the AI to cancel, or in his words 'munch', anything that wasn't 'directly supporting patient care' within the agency. However, neither he nor the model had the knowledge needed to make those decisions. 'I'm sure mistakes were made,' he told ProPublica. 'Mistakes are always made.'

One of the key problems was that the AI reviewed only the first 10,000 characters (roughly 2,500 words) of each contract to determine whether it was 'munchable' – Lavingia's term for work that could be done by VA staffers rather than outsourced, ProPublica reported. Experts who reviewed the code also told ProPublica that Lavingia did not clearly define many critical terms, such as 'core medical/benefits', and used vague instructions, leading to multiple critical contracts being flagged as 'munchable'. For example, the model was told to kill DEI programs, but the prompt failed to define what DEI was, leaving the model to decide. At another point in the code, Lavingia asked the AI to 'consider whether pricing appears reasonable' for maintenance contracts, without defining what 'reasonable' means.

In addition, the AI was built on an older, general-purpose model not suited to the complicated task, which caused it to hallucinate, or make up, contract amounts, sometimes believing they were worth tens of millions as opposed to thousands. Cary Coglianese, a professor at the University of Pennsylvania who studies governmental use of AI, told ProPublica that understanding which jobs could be done by a VA employee would require 'sophisticated understanding of medical care, of institutional management, of availability of human resources' – all things the AI could not do. Lavingia acknowledged the AI model was flawed, but he assured ProPublica that all 'munchable' contracts were vetted by other people.

The VA initially announced, in February, that it would cancel 875 contracts. But various veterans' advocates sounded the alarm, warning that some of those contracts related to safety inspections at VA medical facilities, direct communications with veterans about benefits, and the VA's ability to recruit doctors. One source familiar with the situation in the department told the Federal News Network that some cuts demonstrated a 'communication breakdown' between DOGE advisors, VA leaders, and lawmakers who oversee the VA. The VA soon walked that number back, instead announcing in March it would cancel approximately 585 'non-mission-critical or duplicative contracts', redirecting around $900 million back to the agency.

Lavingia, who was fired from DOGE after approximately 55 days, wrote about his experience on his blog and released the code he used at the VA on GitHub.
