Are we mistaking rhythm for reason?

CoinGeek | 07-05-2025
This post is a guest contribution by George Siosi Samuels, managing director at Faiā. See how Faiā is committed to staying at the forefront of technological advancements here.
Aesthetic bias, euphonics, and the slippery seduction of AI
Most people don't fall for bad ideas because they're stupid. They fall because the idea sounded right.
In the age of artificial intelligence (AI), that simple pattern—trusting what flows—has become a systemic vulnerability. One we're just beginning to name.
Euphonics as a cognitive shortcut
We're conditioned to favor language that feels good. It's how charismatic leaders rally crowds, how copywriters move products, and how pundits command belief. This isn't new. Euphonics—the pleasing rhythm and resonance of words—has long been a persuasive tool in rhetoric, poetry, and propaganda.
But euphonics isn't neutral. Its persuasive power bypasses logic, embedding belief through rhythm over reason.
And now, language models like ChatGPT, Claude, and others have industrialized that instinct.
LLMs are optimized not for truth, but for coherence—for what sounds likely. That means their outputs are often smooth, confident, and structurally correct, even when the underlying content is false, biased, or misleading.
We're not just entering a post-truth world. We're entering a post-sense world—where linguistic fluency is mistaken for conceptual clarity.
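The "sounds likely" failure mode can be illustrated with a toy bigram model, a drastically simplified stand-in for an LLM (the corpus, sentences, and scoring function here are invented for illustration). It scores a fluent word order higher than a scrambled one built from the exact same words, purely because the fluent version matches patterns it has seen:

```python
from collections import defaultdict
import math

# A tiny training corpus; real LLMs do the same thing at vastly larger scale.
corpus = ("the ledger records every transaction "
          "the ledger secures every transaction "
          "every transaction is recorded on the ledger").split()

# Count bigram frequencies: how often word b follows word a.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def log_likelihood(sentence):
    """Sum of log P(next word | previous word), with add-one smoothing."""
    words = sentence.split()
    vocab = len(set(corpus))
    total = 0.0
    for a, b in zip(words, words[1:]):
        total += math.log((counts[a][b] + 1) / (sum(counts[a].values()) + vocab))
    return total

fluent = "the ledger records every transaction"   # matches the training rhythm
clunky = "ledger the transaction every records"   # same words, broken flow
print(log_likelihood(fluent) > log_likelihood(clunky))  # True
```

Nothing in the scoring function checks whether a sentence is true; it only measures how well the word order matches past patterns. Fluency and accuracy are simply different axes.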
The rise of aesthetic bias in AI
Aesthetic bias isn't just about visuals or branding—it's about how well something flows. In UX. In storytelling. In syntax. When something flows, we trust it. When it's jagged, we doubt it.
This bias works fine when aligned with integrity. But it becomes a liability when misused—or, more subtly, unexamined.
AI-generated content, with its silky coherence and infinite scale, makes it easier than ever to:
Fabricate narratives that feel real
Reinforce ideologies through subtle linguistic priming
Smuggle manipulation inside beautifully phrased reasoning
This is not a problem of malice. It's a problem of design. LLMs optimize for pattern, not principle. They echo culture; they don't interrogate it.
Blockchain, verifiability & the new literacy
This is where blockchain—specifically Bitcoin's immutable, timestamped record—offers something AI cannot: verifiability.
In a world of persuasive hallucinations, we need more than beautiful words. We need receipts.
Just as Bitcoin secures economic truth through a public ledger, we need systems that do the same for informational provenance. A kind of timestamped memory. A record that can't be retrofitted to match a new narrative.
Imagine pairing LLMs with on-chain verification:
AI-generated reports with verifiable citations
Timestamped claims tied to transparent sources
Cultural content anchored in publicly auditable truth
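A minimal sketch of the commitment half of that idea follows. The function names and record fields are hypothetical, and a real system would anchor the digest in a blockchain transaction rather than merely compute it, but the core mechanic is just a hash commitment: any retroactive edit to a claim changes its digest and fails verification.

```python
import hashlib
import json
import time

def commit_claim(text: str, source_url: str) -> dict:
    """Build a commitment record for a claim. The digest is what would be
    anchored on-chain; the record itself can be stored anywhere."""
    record = {
        "claim": text,
        "source": source_url,
        "timestamp": int(time.time()),
    }
    # Canonical serialization (sorted keys) so the hash is reproducible.
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_claim(record: dict) -> bool:
    """Recompute the digest over everything except the digest itself.
    Any edit to claim, source, or timestamp breaks the match."""
    body = {k: v for k, v in record.items() if k != "digest"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["digest"]

rec = commit_claim("Report cites source X", "https://example.com/source")
print(verify_claim(rec))           # True
rec["claim"] = "Report cites source Y"  # retrofit the narrative
print(verify_claim(rec))           # False
```

The ledger's only job here is to make the timestamped digest public and immutable; the claim text can live off-chain, which keeps the approach cheap enough to apply at the scale AI generates content.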
This isn't about opposing AI. It's about grounding it.
From flow to fidelity
As a culture, we're still enchanted by flow. But we need to build deeper discernment in an era of algorithmic eloquence. That starts with naming the bias. Just because it sounds true doesn't mean it is.
We need to teach ourselves—and our systems—to ask:
Where did this come from?
What is it optimized for?
Can it be verified?
The seduction of rhythm is real. But reason—when layered with transparency and truth—can still win if we design for it.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Blockchain & AI unlock possibilities