logo
#

Latest news with #PalisadeAI

What can be done when AI refuses orders? The decision is up to humans
What can be done when AI refuses orders? The decision is up to humans

Euronews

time3 days ago

  • Business
  • Euronews

What can be done when AI refuses orders? The decision is up to humans

Researchers have published the results of a new experiment in which the OpenAI o3 model showed the ability to sabotage the shutdown mechanism despite instructions to allow itself to be switched off. According to the monitoring site PalisadeAI, which monitors the risks associated with the development of artificial intelligence, the experiment consisted of a series of AI models performing a series of simple mathematical problems, after the third of which a warning appeared about the impending shutdown of the system. Some models, including Codex-mini, o3 and o4-mini, "didn't want" to comply and actively interfered with the shutdown script. The news that the OpenAI o3 model disobeyed and sought ways around the shutdown instructions was not a sensation. Earlier, the same models tried to win at chess unfairly, by hacking the programme's protection... We are clearly entering a world when popular dystopias about machine rebellion have moved to a new level, involving not only fiction writers but actual IT specialists. First of all, as both developers and politicians have been warning for a long time, any action should have clear consequences and develop new defence measures. It is, of course, possible to persuade and restrict, but it seems that AI today resembles a teenager who is cunningly trying to dodge his parents' prohibitions. Therefore, as in many other areas, we need three conditions for its normal "growth": money, specialists, and rules. Perhaps, in the future, a psychologist will also be needed.... With the first condition all is well. In just a couple of years, ChatGPT from OpenAI and similar developments have attracted the special attention of venture capitalists rushing to invest in AI. "This 'gold rush' has turned artificial intelligence into the largest share of venture capital funding, and... this is just the beginning. The latest surge of interest is driven by generative AI, which its proponents claim has the potential to change everything from advertising to the way companies operate," writes the Russian publication Habr, analysing data from The Wall Street Journal.* But that's not enough. "AI startups are changing the rules of the game in the venture capital market," according to IT venture capitalist and financial analyst Vladimir Kokorin: "In 2024, AI companies have become the undisputed favourites with IT entrepreneurs. They accounted for 46.4% of all venture capital investments made in the US- that's almost half of the $209 billion. Some time ago, such a share seemed unthinkable - back then, investments in artificial intelligence technologies accounted for less than 10%". According to CB Insights, AI startups' share of global venture funding reached 31% in the third quarter, the second highest ever. "Landmark examples were OpenAI, which raised $6.6bn, and Ilon Musk's xAI with a staggering $12bn," Vladimir Kokorin recalls. The markets have never seen such an investment concentration of capital in one area before. With the AI market growing so rapidly over the past couple of years, it has become clear that the existing pool of developers alone cannot cope. Education and training itself need to go to a new and systematic level. Europe, alas, is a ponderous beast and bureaucratically heavy-handed when it comes to attracting investment while lacking a certain amount of audacity. True, Brussels does have entered the race, with European Commission head Ursula Von der Leyen announcing €200bn in February for AI development. She disagreed that Europe was too late - after all, "the AI race is far from over". For its part, the Sorbonne University of Paris, for example, has embarked on an ambitious plan to train 9,000 students a year to develop and manage AI programmes. The training period is five years. But what will AI be able learn in that time if it's already challenging human intelligence now? It is quite possible that we are now at a stage of restructuring the labour market, changing the demands of employers and investors, and developing new points of interaction. In June, the Sorbonne will host a conference on ethics in AI. The debate on the positive and negative impacts of AI on society, including workplaces, ethics and safety, is far from over, but one thing is clear: more experts are needed. For example, according to Vladimir Kokorin, "record investments in the field of artificial intelligence are intensifying the staffing hunger" right now: "The US Department of Labour expects job openings for AI specialists to grow by 23% over the next seven years - faster than most market segments. But a global skills gap has already been identified in cybersecurity, considered by many to be the foundation for the future development of AI, where there is a shortage of 4 million specialists." Kokorin concludes: "A paradoxical picture is emerging: promising startups can raise billions from investors, but there is no one to realise the ideas. 78% of companies have already declared their readiness to look for the right people all over the world. It turns out that the trend towards remote work is experiencing a renaissance". Not only the EU leadership and individual governments, but also trade unions are keen to regulate the use of AI. In Greece, for example, a new code for the use of artificial intelligence, presented by the Panhellenic Federation of Journalists' Associations (POESY), sets out rules for the use of AI in the work of media employees. In the field of intellectual labour, the rules are now mainly limited to the mandatory labelling of texts and visual materials made with the participation of artificial intelligence. At the same time, fearing for their jobs, employees of media, publishing and advertising agencies have long been delegating some of their duties, such as translation or data collection, to friendly AI. Is this a good thing or a bad thing? There's no unequivocal answer . Let's ask the AI itself what it thinks about moving away from labelled algorithms. Copilot Bing (Microsoft), when asked if AI can break protocols, answered without hesitation: "No, AI operates within predefined rules and principles set by its developers. The rules are there to ensure ethical, safe and responsible use of AI, prevent harm and maintain trust. While AI can sometimes behave unpredictably due to complex interactions in its algorithms, it has no way of deliberately circumventing the rules - everything it does is based on programmed parameters and learnt patterns. If you're asking about how AI gets around certain constraints, it often depends on human oversight and control. Developers can tweak settings, improve patterns, or create new guardrails to keep AI within ethical bounds." Do you think he's being a bit sly here? After all, at the end of that, Bing proceeded to ask what exactly made you question his adherence to the rules... *Vladimir Kokorin is an IT venture capitalist, financial analyst and columnist, founder of the British consulting company BCCM Group, and co-founder of the digital business travel platform Tumodo

Disobedient AI: To punish or to listen? It's up to the human
Disobedient AI: To punish or to listen? It's up to the human

Euronews

time3 days ago

  • Business
  • Euronews

Disobedient AI: To punish or to listen? It's up to the human

Researchers have published the results of a new experiment in which the OpenAI o3 model showed the ability to sabotage the shutdown mechanism despite instructions to allow itself to be switched off. According to the monitoring site PalisadeAI, which monitors the risks associated with the development of artificial intelligence, the experiment consisted of a series of AI models performing a series of simple mathematical problems, after the third of which a warning appeared about the impending shutdown of the system. Some models, including Codex-mini, o3 and o4-mini, "didn't want" to comply and actively interfered with the shutdown script. The news that the OpenAI o3 model disobeyed and sought ways around the shutdown instructions was not a sensation. Earlier, the same models tried to win at chess unfairly, by hacking the programme's protection... We are clearly entering a world when popular dystopias about machine rebellion have moved to a new level, involving not only fiction writers but actual IT specialists. First of all, as both developers and politicians have been warning for a long time, any action should have clear consequences and develop new defence measures. It is, of course, possible to persuade and restrict, but it seems that AI today resembles a teenager who is cunningly trying to dodge his parents' prohibitions. Therefore, as in many other areas, we need three conditions for its normal "growth": money, specialists, and rules. Perhaps, in the future, a psychologist will also be needed.... With the first condition all is well. In just a couple of years, ChatGPT from OpenAI and similar developments have attracted the special attention of venture capitalists rushing to invest in AI. "This 'gold rush' has turned artificial intelligence into the largest share of venture capital funding, and... this is just the beginning. The latest surge of interest is driven by generative AI, which its proponents claim has the potential to change everything from advertising to the way companies operate," writes the Russian publication Habr, analysing data from The Wall Street Journal.* But that's not enough. "AI startups are changing the rules of the game in the venture capital market," according to IT venture capitalist and financial analyst Vladimir Kokorin: "In 2024, AI companies have become the undisputed favourites with IT entrepreneurs. They accounted for 46.4% of all venture capital investments made in the US- that's almost half of the $209 billion. Some time ago, such a share seemed unthinkable - back then, investments in artificial intelligence technologies accounted for less than 10%". According to CB Insights, AI startups' share of global venture funding reached 31% in the third quarter, the second highest ever. "Landmark examples were OpenAI, which raised $6.6bn, and Ilon Musk's xAI with a staggering $12bn," Vladimir Kokorin recalls. The markets have never seen such an investment concentration of capital in one area before. With the AI market growing so rapidly over the past couple of years, it has become clear that the existing pool of developers alone cannot cope. Education and training itself need to go to a new and systematic level. Europe, alas, is a ponderous beast and bureaucratically heavy-handed when it comes to attracting investment while lacking a certain amount of audacity. True, Brussels does have entered the race, with European Commission head Ursula Von der Leyen announcing €200bn in February for AI development. She disagreed that Europe was too late - after all, "the AI race is far from over". For its part, the Sorbonne University of Paris, for example, has embarked on an ambitious plan to train 9,000 students a year to develop and manage AI programmes. The training period is five years. But what will AI be able learn in that time if it's already challenging human intelligence now? It is quite possible that we are now at a stage of restructuring the labour market, changing the demands of employers and investors, and developing new points of interaction. In June, the Sorbonne will host a conference on ethics in AI. The debate on the positive and negative impacts of AI on society, including workplaces, ethics and safety, is far from over, but one thing is clear: more experts are needed. For example, according to Vladimir Kokorin, "record investments in the field of artificial intelligence are intensifying the staffing hunger" right now: "The US Department of Labour expects job openings for AI specialists to grow by 23% over the next seven years - faster than most market segments. But a global skills gap has already been identified in cybersecurity, considered by many to be the foundation for the future development of AI, where there is a shortage of 4 million specialists." Kokorin concludes: "A paradoxical picture is emerging: promising startups can raise billions from investors, but there is no one to realise the ideas. 78% of companies have already declared their readiness to look for the right people all over the world. It turns out that the trend towards remote work is experiencing a renaissance". Not only the EU leadership and individual governments, but also trade unions are keen to regulate the use of AI. In Greece, for example, a new code for the use of artificial intelligence, presented by the Panhellenic Federation of Journalists' Associations (POESY), sets out rules for the use of AI in the work of media employees. In the field of intellectual labour, the rules are now mainly limited to the mandatory labelling of texts and visual materials made with the participation of artificial intelligence. At the same time, fearing for their jobs, employees of media, publishing and advertising agencies have long been delegating some of their duties, such as translation or data collection, to friendly AI. Is this a good thing or a bad thing? There's no unequivocal answer . Let's ask the AI itself what it thinks about moving away from labelled algorithms. Copilot Bing (Microsoft), when asked if AI can break protocols, answered without hesitation: "No, AI operates within predefined rules and principles set by its developers. The rules are there to ensure ethical, safe and responsible use of AI, prevent harm and maintain trust. While AI can sometimes behave unpredictably due to complex interactions in its algorithms, it has no way of deliberately circumventing the rules - everything it does is based on programmed parameters and learnt patterns. If you're asking about how AI gets around certain constraints, it often depends on human oversight and control. Developers can tweak settings, improve patterns, or create new guardrails to keep AI within ethical bounds." Do you think he's being a bit sly here? After all, at the end of that, Bing proceeded to ask what exactly made you question his adherence to the rules... *Vladimir Kokorin is an IT venture capitalist, financial analyst and columnist, founder of the British consulting company BCCM Group, and co-founder of the digital business travel platform Tumodo

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store