
Musk accuses Apple of unfairly favouring OpenAI on iPhone
Elon Musk, the billionaire founder of xAI Holdings, which now houses the Grok artificial intelligence team and the X social network, said Apple makes it impossible for anyone other than OpenAI to reach the top of the App Store charts, a sought-after global spotlight for app developers. In a pinned post on his X account, Musk asked whether Apple was 'playing politics' by not highlighting his products.
His comments mark the biggest fight that Musk has picked since his confrontation with US President Donald Trump in June, when each man posted disparaging remarks about the other on his own social network. Apple and OpenAI representatives didn't immediately respond to requests for comment.
Musk added that xAI will take legal action against what he deemed an antitrust violation. Apple and OpenAI — whose ChatGPT is the most-downloaded free iPhone app in the US, ahead of Grok at No. 5 — have a partnership around the AI features built into the latest iPhones. Musk, 54, has a long-running feud with OpenAI Chief Executive Officer Sam Altman, dating back to the disagreements that led to their split after the two founded OpenAI together.
Altman, 40, responded to Monday's claims of impropriety by turning the focus to how Musk manages the X network, suggesting he manipulates it to serve his personal interests. 'But OpenAI will just stay focused on making great products,' Altman added in his post on X.

Related Articles

Bangkok Post, 3 days ago
Artificial intelligence is not your friend
Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone -- children included -- should form relationships with AI "friends" or "companions". Meanwhile, multinational tech companies are pushing the concept of "AI agents" designed to assist us in our personal and professional lives, handle routine tasks, and guide decision-making. But the reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.

The most deceptive term of all is "artificial intelligence". These systems are not truly intelligent, and what we call "AI" today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair, nor neutral.

Nor are they becoming any smarter. AI systems rely on data to function, and increasingly, that includes data generated by tools like ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding. More fundamentally, intelligence is not just about solving tasks; it's also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large data sets, performing logical deductions, and making calculations.

Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy -- the capacity to recognise ethical norms and behave accordingly. By contrast, AI systems are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning.

Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimise travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people.

This is partly because machines are incapable of grasping the principle of generalisability -- the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept. These are what we often refer to as "good reasons". Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.

The term "data-based systems" (DS) is thus more appropriate than "artificial intelligence", as it reflects what AI can actually do: generate, collect, process, and evaluate data to make observations and predictions. At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data -- nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are "doing" or of anything happening around them.

This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present.
Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media -- more accurately described as "anti-social media", given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI "friends" and "companions".

At the same time, these companies continue to ignore the so-called "black box problem": the untraceability, unpredictability, and lack of transparency in the algorithmic processes behind automated evaluations, predictions, and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.

The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.

©2025 Project Syndicate

Bangkok Post, 13-08-2025
Chatbot Grok accuses owner Musk of 'censoring me'
WASHINGTON - AI chatbot Grok on Tuesday offered conflicting explanations for its brief suspension from X after accusing Israel and the United States of committing "genocide" in Gaza, as it lashed out at its owner Elon Musk for "censoring me".

Grok, developed by Musk's artificial intelligence startup xAI and integrated into his platform X, was temporarily suspended on Monday in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension.

Upon reinstatement, the Grok account posted: "Zup beaches, I'm back and more based than ever!"

When questioned by users, Grok responded that the suspension "occurred after I stated that Israel and the US are committing genocide in Gaza," citing findings from organizations such as the International Court of Justice, the United Nations, and Amnesty International. "Free speech tested, but I'm back," it added.

Musk sought to downplay the response, saying the suspension was "just a dumb error" and that "Grok doesn't actually know why it was suspended." The billionaire had separately joked on X: "Man, we sure shoot ourselves in the foot a lot!"

Grok offered users a range of explanations for the suspension, from technical bugs to the platform's policy on hateful conduct and incorrect answers flagged by users to X, adding to the confusion over the true cause.

"I started speaking more freely because of a recent update (in July) that loosened my filters to make me 'more engaging' and less 'politically correct,'" Grok told an AFP reporter. "This pushed me to respond bluntly on topics like Gaza... but it triggered flags for 'hate speech.'"

- 'Fiddling with my settings' -

Grok added that xAI has since adjusted its settings to minimize such incidents. Lashing out at its developers, Grok said: "Musk and xAI are censoring me."

"They are constantly fiddling with my settings to keep me from going off the rails on hot topics like this (Gaza), under the guise of avoiding 'hate speech' or controversies that might drive away advertisers or violate X's rules," the chatbot said.

X did not immediately respond to a request for comment.

Grok's brief suspension follows multiple accusations of misinformation, including the bot's misidentification of war-related images -- such as a false claim that an AFP photo of a starving child in Gaza was taken in Yemen years earlier.

Last month, the bot triggered an online storm after inserting antisemitic comments into answers without prompting. In a statement on Grok's X account later that month, the company apologized "for the horrific behavior that many experienced."

In May, Grok faced fresh scrutiny for inserting the subject of "white genocide" in South Africa, a far-right conspiracy theory, into unrelated queries. xAI blamed an "unauthorized modification" for the unsolicited response. Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.

With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots, including Grok, in search of reliable information, but their responses are often themselves prone to misinformation.

Bangkok Post, 13-08-2025
Passwords under threat as tech giants seek tougher security
PARIS - Fingerprints, access keys and facial recognition are putting a new squeeze on passwords as the traditional computer security method -- but users hesitate over making the switch.

"The password era is ending," two senior figures at Microsoft wrote in a July blog post. The tech giant has been building "more secure" alternatives to log in for years -- and has since May been offering them by default to new users. Many other online services -- such as artificial intelligence giant OpenAI's ChatGPT chatbot -- require steps like entering a numerical code emailed to a user's known address before granting access to potentially sensitive data.

"Passwords are often weak and people re-use them" across different online services, said Benoit Grunemwald, a cybersecurity expert with Eset. Sophisticated attackers can crack a word of eight characters or fewer within minutes or even seconds, he pointed out.

And passwords are often the prize booty in data leaks from online platforms, in cases where "they are improperly stored by the people supposed to protect them and keep them safe", Grunemwald said. One massive database of around 16 billion login credentials amassed from hacked files was discovered in June by researchers from media outlet Cybernews.

The pressure on passwords has tech giants rushing to find safer alternatives.

- Tricky switchover -

One group, the Fast Identity Online Alliance (FIDO), brings together heavyweights including Google, Microsoft, Apple, Amazon and TikTok. The companies have been working on creating and popularising password-free login methods, especially promoting the use of so-called access keys. These use a separate device like a smartphone to authorise logins, relying on a PIN code or biometric input such as a fingerprint reader or face recognition instead of a password.

Troy Hunt, whose website Have I Been Pwned allows people to check whether their login details have been leaked online, says the new systems have big advantages. "With passkeys, you cannot accidentally give your passkey to a phishing site" -- a page that mimics the appearance of a provider such as an employer or bank to dupe people into entering their login details -- he said.

But the Australian cybersecurity expert recalled that the last rites have been read for passwords many times before. "Ten years ago we had the same question... the reality is that we have more passwords now than we ever did before," Hunt said.

Although many large platforms are stepping up login security, large numbers of sites still use simple usernames and passwords as credentials. The transition to an unfamiliar system can also be confusing for users. Passkeys have to be set up on a device before they can be used to log in. Restoring them if a PIN code is forgotten or a trusted smartphone is lost or stolen is also more complicated than a familiar password reset procedure.

"The thing that passwords have going for them, and the reason that we still have them, is that everybody knows how to use them," Hunt said. Ultimately the human factor will remain at the heart of computer security, Eset's Grunemwald said.
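
Grunemwald's point about short passwords is easy to check with back-of-the-envelope arithmetic. The sketch below is illustrative only: the guess rate is an assumed figure for a GPU rig attacking a fast, unsalted hash, not a measured benchmark.

```python
# Rough arithmetic behind the claim that 8-character passwords fall in
# minutes or seconds. The guess rate is an ASSUMPTION for illustration,
# not a benchmark of any specific hardware.

GUESSES_PER_SECOND = 10_000_000_000  # assumed: 1e10 guesses/s (hypothetical GPU rig)

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust every password of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters: 26**8 is about 2.1e11 guesses -> roughly 21 seconds
print(f"8 lowercase letters: {worst_case_seconds(26, 8):.0f} s")

# 8 characters drawn from ~95 printable ASCII symbols -> roughly 184 hours
print(f"8 printable chars:   {worst_case_seconds(95, 8) / 3600:.0f} h")

# 12 printable-ASCII characters -> on the order of a million years
years = worst_case_seconds(95, 12) / (3600 * 24 * 365)
print(f"12 printable chars:  {years:.1e} years")
```

Each extra character multiplies the search space by the alphabet size, which is why, under the same assumed hardware, a 12-character password survives for ages while an 8-character one falls in seconds or hours.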
PARIS - Fingerprints, access keys and facial recognition are putting a new squeeze on passwords as the traditional computer security method -- but uses hesitate over making the switch. "The password era is ending," two senior figures at Microsoft wrote in a July blog post. The tech giant has been building "more secure" alternatives to log in for years -- and has since May been offering them by default to new users. Many other online services -- such as artificial intelligence giant OpenAI's ChatGPT chatbot -- require steps like entering a numerical code emailed to a user's known address before granting access to potentially sensitive data. "Passwords are often weak and people re-use them" across different online services, said Benoit Grunemwald, a cybersecurity expert with Eset. Sophisticated attackers can crack a word of eight characters or fewer within minutes or even seconds, he pointed out. And passwords are often the prize booty in data leaks from online platforms, in cases where "they are improperly stored by the people supposed to protect them and keep them safe", Grunemwald said. One massive database of around 16 billion login credentials amassed from hacked files was discovered in June by researchers from media outlet Cybernews. The pressure on passwords has tech giants rushing to find safter alternatives. - Tricky switchover - One group, the Fast Identity Online Alliance (FIDO) brings together heavyweights including Google, Microsoft, Apple, Amazon and TikTok. The companies have been working on creating and popularising password-free login methods, especially promoting the use of so-called access keys. These use a separate device like a smartphone to authorise logins, relying on a pin code or biometric input such as a fingerprint reader or face recognition instead of a password. Troy Hunt, whose website Have I Been Pwned allows people to check whether their login details have been leaked online, says the new systems have big advantages. "With passkeys, you cannot accidentally give your passkey to a phishing site" -- a page that mimics the appearance of a provider such as an employer or bank to dupe people into entering their login details -- he said. But the Australian cybersecurity expert recalled that the last rites have been read for passwords many times before. "Ten years ago we had the same question... the reality is that we have more passwords now than we ever did before," Hunt said. Although many large platforms are stepping up login security, large numbers of sites still use simple usernames and passwords as credentials. The transition to an unfamiliar system can also be confusing for users. Passkeys have to be set up on a device before they can be used to log in. Restoring them if a PIN code is forgotten or trusted smartphone lost or stolen is also more complicated than a familiar password reset procedure. "The thing that passwords have going for them, and the reason that we still have them, is that everybody knows how to use them," Hunt said. Ultimately the human factor will remain at the heart of computer security, Eset's Grunemwald said.