Latest news with #digitalassistants


Sustainability Times
18-07-2025
- Sustainability Times
'They Blamed the Students—But It Was Us': Professors Caught Using ChatGPT as Secret Weapon While Cracking Down on Classroom Cheating
IN A NUTSHELL
📚 Teachers are increasingly relying on digital assistants to handle educational tasks, reshaping traditional teaching methods.
🤖 The use of AI by educators often goes unmentioned, leading to student concerns over transparency and trust in the classroom.
⚖️ Universities are crafting ethical frameworks to manage AI's role in education, promoting disclosure and human oversight.
🔍 Students are becoming adept at identifying AI-generated content, highlighting the need for honest communication about its use.

In recent years, the educational landscape has undergone a profound transformation as teachers increasingly rely on digital assistants to aid in their duties. This silent shift is reshaping the very essence of knowledge transmission. What was once a straightforward exchange of wisdom between teacher and student is now mediated by artificial intelligence (AI), raising questions about transparency and trust. While the integration of AI into education may seem a natural progression in a tech-driven world, it becomes contentious when its use remains concealed from students, challenging the fundamental trust that underpins educational relationships.

The Silent Automation of Teaching Practices

The use of artificial intelligence in education is not solely a tool for students; teachers, too, are increasingly harnessing its capabilities to streamline their workloads. From creating instructional materials to crafting quizzes and providing personalized feedback, AI's presence is growing in the classroom. Notably, David Malan at Harvard has developed a chatbot to assist in his computer science course, while Katy Pearce at the University of Washington uses AI trained on her evaluation criteria to help students progress even in her absence.

Despite these advancements, some educators choose to keep their use of AI under wraps. Overwhelmed by grading and time constraints, they delegate certain tasks to AI without disclosure. Rick Arrowood, a professor at Northeastern University, admitted to using generative tools for creating his materials without thoroughly reviewing them or informing his students. Reflecting on this, he expressed regret over his lack of transparency, wishing he had better managed the practice.

AI Use in Education Sparks Student Tensions

The non-transparent use of AI by educators has led to growing unease among students. Many notice the impersonal style and repetitive vocabulary of AI-generated content, prompting them to become adept at identifying artificial texts. This has led to instances like that of Ella Stapleton, a Northeastern student who discovered a direct ChatGPT request within her course materials. She filed a complaint and demanded a refund of her tuition fees. On platforms like Rate My Professors, criticism of standardized and ill-suited content is mounting, with students perceiving such materials as incompatible with quality education. This sense of betrayal is heightened when students are prohibited from using the same tools. For many, teachers' reliance on AI signifies injustice and hypocrisy, fueling further discontent.
Ethical Frameworks for AI Use in Education

In response to these tensions, several universities are establishing regulatory frameworks to govern AI's role in education. The University of California, Berkeley, for instance, mandates explicit disclosure of AI-generated content, coupled with human verification. French institutions are following suit, acknowledging that a complete ban is no longer feasible. An investigation by Tyton Partners, cited by the New York Times, found that nearly one in three professors regularly uses AI, yet few disclose this to their students. This disparity fuels conflict, as emphasized by Paul Shovlin from Ohio University. He argues that the tool itself is not the issue, but rather how it is integrated. Teachers still play a crucial role as human interlocutors capable of interpretation, evaluation, and dialogue.

Some educators are choosing to embrace transparency by explaining and regulating their AI use, using it to enhance interactions. Though still a minority, this approach could pave the way for reconciling pedagogical innovation with restored trust. As we navigate this evolving educational landscape, the balance between technology and transparency remains a pressing concern. How can educators and institutions work together to ensure that the integration of AI enhances rather than hinders the educational experience?


Phone Arena
17-07-2025
- Phone Arena
Siri is holding Apple back. A new AI smart assistant is the only way forward
Immediately, soothing tunes start playing, and after a short five-second wait, your iPhone gently utters, "Done. Do you want me to pack your bag as well?" You immediately get a booking confirmation email for a gorgeous Airbnb by the lake, reserved three weeks from now at an excellent price. Life suddenly feels good. You pat yourself on the back for getting a phone with such a smart assistant. Suddenly, you wake up. Your phone's screen reads, "Sorry, I didn't quite catch that. Did you mean 'Play Led Zeppelin'?"

I think that the only way Apple could save Siri is by actually killing it off. It's been, what, more than thirteen years since iPhones have had access to Siri, and it has never felt like Apple has the upper hand in digital assistants. Google has been dominating the field since the early days of Google Now, through the golden age of the Google Assistant, and is now pretty much impossible to catch due to the next-gen Gemini assistant, which combines the forte of a smart assistant with the intelligence chops of an LLM agent. By now, the damage is done: Siri will probably always be known for its poor capabilities, inability to decipher natural-language queries, and overall ineptitude. It's pretty much impossible to bounce back from such a negative image that has been picking up all kinds of slack throughout the years.

The Apple Intelligence rollout disaster in iOS 18 has hurt Siri the most and is certainly the biggest nail in the coffin. We were promised a smarter and more capable Siri, which could potentially alleviate some of the negativity surrounding the assistant. While all advertised features eventually arrived on the iPhone, it's the glaring omission of the revamped Siri assistant that damaged Apple's prestige the most. That's probably the reason we basically heard nothing about Siri during the Apple WWDC'25 developer summit, when iOS 26, macOS 26, and iPadOS 26 were announced, and instead we got the proverbial sand in the eyes that was the Liquid Glass redesign.

Apple's next smart assistant needs a reputation makeover

I am convinced that Apple needs to give Siri a major reputation makeover when it eventually fixes the assistant, which might happen in 2026, 2027, or beyond. A name change should be the bare minimum here: no matter how much more capable Siri becomes, it will always be just that, "Siri." It's a well-known and established part of Apple's ecosystem, which is generally great for brand recognition, but in this particular case that's actually hurting Apple. For example, Google hasn't been afraid to "move fast and break things," making Gemini the spiritual successor to the Google Assistant, which automatically plays with the psychology of it all and subconsciously convinces you that Gemini is much more powerful and capable (which it objectively is). The only way to fix things is to let Siri die and emerge from the ashes reborn as a new smart assistant.

What Apple's next smart assistant should be like

The most recent scoop on the matter suggests Apple's next version of Siri will be powered by large language models, which would help it become that much more conversational and interactive. However, it doesn't seem that Apple is aiming to develop an in-house LLM from the ground up but will instead collaborate with Anthropic or OpenAI.
These two companies pull the strings behind Claude and ChatGPT, which are among the more capable and well-known models out there. A few months ago, rumors also floated Google itself as a potential partner in revamping Siri, but Cupertino is probably weighing its options carefully so that it doesn't hand its rival too much power. With that in mind, I can imagine a future Apple smart assistant that is more proactive, conversational, and capable of executing complex multi-step app actions (which was already advertised as a future capability of the assistant anyway). As usual with Apple, the focus will be privacy and on-device processing of data, but for the sake of sparing user resources, we might see some offloading to dedicated Apple servers that will take care of heavier AI requests. And, of course, all of that goes along with a nice name change, a new interface following the Liquid Glass design language, and a promotional campaign to showcase the much better capabilities of the new assistant compared with the Siri of old. I don't know, maybe I'm asking too much. Maybe people are content with an assistant that's best at setting alarms and timers. Maybe nobody wants a futuristic smart assistant that knows you and understands you. One can dream.


Arab News
26-06-2025
- Business
- Arab News
Superintelligent AI is coming and Saudi Arabia is ready
When people hear the term 'artificial intelligence,' they typically think of chatbots and digital assistants. But what's coming next could significantly impact the digital economy in the Middle East and beyond. What we are referring to is superintelligent AI. And if global tech leaders are right, it could arrive in fewer than five years. But what does that involve? How is it different from today's AI? And what are the implications for a region focused on leading in technology and innovation?

Most people know AI through generative tools like ChatGPT, Gemini, and DALL-E — systems that can write, code, and produce art. While powerful, these tools are best suited to narrow tasks and rely on patterns found in existing data. The new challenge is to create artificial general intelligence — AI that thinks and acts like a human across a wide range of tasks. In short, AGI could learn new subjects, solve unfamiliar problems creatively, and adapt its behavior much like a human mind. Artificial superintelligence, or ASI, would go even further. It would outperform the most intelligent humans in virtually every domain, from science and economics to emotional intelligence. Not just faster or smarter, but capable of things humans can't yet do. The foundations are already in place: faster computers, improved neural systems, and reasoning systems with numerous agents.

The Middle East is increasingly gearing up for the change — with Saudi Arabia at the forefront. In the Kingdom, the focus has shifted from simply using AI to developing and managing homegrown AI systems. Earlier this month, Saudi Arabia launched Humain, a new initiative backed by the Public Investment Fund. The project has ambitious goals: to build robust AI infrastructure, develop local cloud solutions, and create a powerful multimodal language model in Arabic. Because superintelligence will require adapting to local contexts, respecting cultural values, and maintaining control over data and systems, Saudi Arabia aims not only to use AI, but to shape it as a platform for future generations. Humain will be powered by more than 18,000 Blackwell GPUs from Nvidia. AMD and Microsoft will help fund research on AI training systems and chip architecture, while Amazon Web Services plans to invest $5 billion to build an AI Zone in the Kingdom. These partnerships are more than transactions — they are building blocks for long-term technological strength. As the world prepares for the emergence of superintelligence, we'll need more computing power, deeper government coordination, and stronger cross-border collaboration. Saudi Arabia is making its move now, ahead of the curve.

But what will superintelligent AI mean for the broader Middle East economy? It could accelerate four major transformations, starting with more intelligent governance and rapid infrastructure development. Such systems could analyze countless policies in real time and improve sectors such as traffic management, public health, and economic planning. This kind of capability could help Saudi Arabia achieve its Vision 2030 goals more quickly and accurately. Superintelligent AI will also unlock personalized learning. Imagine AI tutors that adapt to each student's learning style, cultural context, and emotional state.
With superintelligence, it's possible to deliver large-scale, individualized education, thereby building a generation of skilled experts across fields. The scientific potential is even greater. In areas like medicine, clean energy, and materials science, AI could enable breakthroughs, whether in drug discovery, hydrogen technologies, or advanced materials. These applications align closely with Saudi Arabia's growing investments in biotechnology and sustainable energy. New industries will also emerge. With superintelligent systems, we could see autonomous legal platforms, AI-designed cities, and travel driven by emotional experiences. NEOM may serve as a testing ground for many of these innovations.

Regional leadership in AI governance must also grow. The future is not guaranteed to be positive. Superintelligence is unlike any tool humanity has ever created. Without clear rules and alignment, it could harm economies, displace jobs, or deepen inequality. This is why governance, alignment, and ethics must evolve in parallel with technological progress. The region is well placed to lead not only in adoption but in shaping the frameworks around it. As Saudi minister Abdullah Al-Swaha recently said: 'Instead of only following standards, we should help create them.'

In the end, readiness provides the edge. Superintelligent AI is approaching quickly. The nations that invest early, think boldly, and manage wisely will have a real opportunity to leap ahead in this century. Saudi Arabia is demonstrating what it means to think ahead. From building sovereign AI systems to securing large-scale infrastructure deals, it is laying the foundation for a future where prosperity is driven not by oil or labor, but by intelligence. If superintelligence emerges by 2028, the Middle East will not simply be a witness — it will be a leader.

• Yousef Khalili is the global chief transformation officer and CEO for the Middle East and Africa at Quant, a company developing advanced digital employee technology aimed at redefining the future of customer experience.


Android Authority
18-06-2025
- Android Authority
Android may soon recommend you use an 'optimized' voice assistant — here's what that really means
TL;DR
OEMs can now highlight specific digital assistants as 'optimized' for their Android devices in the default app settings.
Google says it created the feature at an OEM's request so OEMs can recommend assistants verified to work with their first-party applications.
While Google says it won't do this on its own Pixel phones, the feature comes as the company is facing growing regulatory pressure.

For many years, Google Assistant was widely seen as the best personal assistant app for Android. While Bixby and Alexa were better suited for users heavily invested in the Samsung and Amazon ecosystems, Google Assistant was generally the better performer for most tasks on Android. In the era of LLM chatbots, however, the Android voice assistant space is more competitive than ever. With so many capable assistants to choose from, it can be tough for users to find the right one. To help with this decision, Android OEMs can now choose to highlight which assistants are 'optimized' for their devices — a feature that notably comes as Google faces mounting regulatory pressure over its deals that make its own services the default on Android.

Why OEMs want to highlight certain voice assistants, and how they'll do so

For a long time, Android has let you change the default voice assistant — the service that launches when you hold the power button or swipe up from the corner. You can do this on most devices by going to the 'default digital assistant app' page under Settings > Apps > Default apps. The operating system doesn't favor any particular service, listing all eligible voice assistants alphabetically. As a result, not even Google's own assistant is placed at the top of the list. (A minimal sketch of what makes an app an 'eligible' assistant appears below.)

[Image: The current 'default digital assistant app' page in Android (Mishaal Rahman / Android Authority)]

A few months back, though, I was digging through the then-latest beta of Android 16 when I found evidence of a new 'recommended assistants' feature. The feature allows OEMs to define a list of voice assistants that will appear in a separate 'Optimized for device' category at the top of the 'default digital assistant app' page. All other installed voice assistants are then displayed below in an 'Others' category on the same page. Because no description accompanies the 'Optimized for device' header, and the feature's source code doesn't explain its purpose either, I wasn't sure what made a voice assistant 'optimized' for a particular device. Furthermore, the feature isn't active on any Android 16 devices we checked, leaving us without any real-world examples of what OEMs might consider an 'optimized' assistant. However, given the broader context, we thought that maybe Google or one of its OEM partners wanted to label their own voice assistant as 'optimized.' This would subtly discourage users from picking other options, as promoting an assistant as 'optimized' for a device implies that all others are sub-optimal, which isn't necessarily true.

To find out if Google planned to recommend its own Gemini assistant this way, we reached out to the company with our findings. A Google spokesperson told us that it created the feature at the request of an OEM partner. Apparently, an OEM wanted the ability to highlight digital assistants it had verified to work with its first-party apps.
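To ground what 'eligible voice assistant' means on that settings page, here is a minimal Kotlin sketch of the framework entry points an assistant app implements. The class names are hypothetical, the manifest wiring is only summarized in comments, and this is an illustrative sketch rather than a complete assistant.

```kotlin
// Minimal sketch of the framework entry points an assistant app implements.
// Class names are hypothetical. In the manifest, MyAssistantService must be
// declared with android:permission="android.permission.BIND_VOICE_INTERACTION",
// an intent filter for "android.service.voice.VoiceInteractionService", and a
// <meta-data> entry pointing to a voice-interaction XML that names the session
// service; only then does the app show up as an eligible choice on the
// 'default digital assistant app' page.
import android.os.Bundle
import android.service.voice.VoiceInteractionService
import android.service.voice.VoiceInteractionSession
import android.service.voice.VoiceInteractionSessionService

// Long-lived service the system binds to while this app is the chosen assistant.
class MyAssistantService : VoiceInteractionService()

// Creates the session that appears when the user invokes the assistant
// (power-button hold or corner swipe).
class MyAssistantSessionService : VoiceInteractionSessionService() {
    override fun onNewSession(args: Bundle?): VoiceInteractionSession =
        VoiceInteractionSession(this)
}
```

Any installed app exposing these pieces lands in the same alphabetical list; the new 'Optimized for device' header is purely an OEM-configured grouping layered on top of that list.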
OEMs would need Google's help to develop this feature because the underlying component, Permission Controller, is a Project Mainline module that Google controls. Here's the full statement shared by a Google spokesperson in response to our inquiry:

'This optional UI allows OEMs to provide a list of assistants that have been verified to work with their applications and is controlled and configured by the OEM, including what assistants they choose to recommend, the total number of recommended assistants and whether they want to use it at all.' – Google spokesperson

The spokesperson also shared a mockup of the 'default digital assistant app' page that shows what it could look like if the 'recommended assistants' feature is not only enabled but customized by the OEM. You can see this mockup in the image on the left below. On the right is my own screenshot of the page with the default text; to take this screenshot, I created a fake voice assistant app and manually elevated it to the 'recommended' service with a bit of tinkering.

[Image: Google's mockup of the 'default digital assistant app' page, showing how OEMs can configure which services to recommend]
[Image: My screenshot of the 'default digital assistant app' page, using a fake assistant service I manually set as the 'recommended assistant' on a Pixel phone running Android 16]

Google's explanation, and the image it shared, align with other recent developments in Android. In my review of Android 16, I noted that several OEMs are using the new App Functions API to integrate Gemini with their own calendar, clock, and notes apps. Because of these new integrations, one or more of these OEMs — such as Xiaomi — could use the 'recommended assistants' feature to label Gemini as 'optimized,' signaling to users that it works best with the phone's native applications.

[Image: Gemini Apps for the Xiaomi Calendar and Xiaomi Notes apps made possible by the App Functions API (Mishaal Rahman / Android Authority)]

Will Google be recommending its Gemini assistant on its own Pixel devices?

While Google developed this feature based on OEM feedback, nothing precludes the company from leveraging it for its own devices. However, our Google spokesperson told us the company doesn't plan to deploy the feature on its own Pixel phones. We don't know where else this UI will show up, but we can safely say it won't make an appearance on Pixel devices at the very least.

We're glad Google isn't forcing Gemini to be labeled as the 'optimized' assistant on all Android devices, let alone its own Pixel phones. Though it's well within the company's power to do so, we feel that would be an incredibly shortsighted decision. Google is already under intense scrutiny for its default assistant deals, and pushing Gemini over competitors like Perplexity would only add fuel to the fire. While forcing such a change would surely be tempting — especially as companies like Motorola and Samsung ink deals to preload Perplexity — it would only exacerbate Google's legal troubles. Leaving the decision in the hands of OEMs is a wise move.


Zawya
10-06-2025
- Zawya
What's the significance of AI agents?
Last week, my article revolved around Artificial Intelligence (AI) agents: digital assistants and helpers that go beyond simply responding to user inputs and prompts, acting proactively rather than reactively to your requests. This week, I want to elaborate on the significance these AI agents have for us.

Truth be told, as a senior professional one can feel overwhelmed and overloaded, not just with information (from news outlets, social media, work, family, and so on) but with personal and professional actions and commitments too (checking various emails, meetings, projects, and a to-do list that keeps growing). I know some might say that productivity tools and apps can help manage time and tasks, yet in practice the hours they demand are simply not there. A probable solution, therefore, is the one from my last article: utilizing AI agents, for they would (as soon as they mature) assist you 24/7, 365 days a year, with virtually all your tasks, even while you are asleep or on vacation.

What is the significance of these AI agents, though? In a nutshell, it is like having someone smart and knowledgeable next to you, around the clock, to assist you. What more? They are quicker, cheaper (in price, not in quality), capable of multitasking, and very responsive. Having all that in a human being is hardly practical, let alone the cost you would pay if such a service were available, for you may need more than one person or resource to get such activities done (in case an organization tried to mimic such a need today). Humans have emotions, humans need time to rest, and humans cannot normally handle more than one task at a time with superior quality. Yet smart machines and robots can (with limitations, as it stands today).

AI agents save lots of time. Think of all the repetitive tasks they can do for you every day, from replying to emails, to reviewing and summarizing your readings and reports, to organizing your digital calendar, and many more. All that time can be freed up for you to focus on the activities that only you can do best (or that don't need assistance).

AI agents learn and adapt quite fast, thanks to the large language models (LLMs) which I will try to write about in the coming articles. Briefly, LLMs get smarter every day by learning from context, applying logic, and sometimes getting creative. As a practical example, see how generative AI apps like ChatGPT and Gemini work (they all depend on LLMs).

Lastly, the biggest significance of AI agents, as I mentioned earlier, is that they work while you are asleep. They don't take breaks, don't ask for leave, and certainly are not moody (as they don't have emotions as humans do). This is a big bonus of working with AI agents.

To conclude, AI agents will be able to think, plan, and act on your behalf. All you need to do is give them a goal (such as 'Find me a restaurant here in Muscat that serves Mandi rice, at a place near the beach, and book it for me at 12 p.m. next Saturday, the 14th of June'). The AI agent will work out the steps, look for the information, and get you the results, performing the necessary booking for you. AI agents will surely redefine what's possible by making life management much easier and faster; something I personally need to keep abreast of. Until we catch up again next week, stay positive and stay tuned.
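To make the 'give it a goal and it plans the steps' idea concrete, here is a small, purely illustrative Kotlin sketch of an agent loop. The function names (planSteps, executeStep), the hard-coded steps, and the restaurant goal are hypothetical stand-ins; a real agent would call an LLM to plan and real search or booking services to act.

```kotlin
// Purely illustrative agent loop: goal -> plan -> act -> results.
data class Step(val description: String)

// A real implementation would ask an LLM to decompose the goal into steps.
fun planSteps(goal: String): List<Step> = listOf(
    Step("Interpret the goal: $goal"),
    Step("Search for restaurants in Muscat that serve Mandi rice near the beach"),
    Step("Compare ratings, distance to the beach, and table availability"),
    Step("Book a table for 12 p.m. next Saturday and send a confirmation"),
)

// A real implementation would invoke the relevant tool or API (search, booking) here.
fun executeStep(step: Step): String = "done: ${step.description}"

// Plan once, then act step by step; each result could feed back into replanning.
fun runAgent(goal: String): List<String> =
    planSteps(goal).map { step -> executeStep(step) }

fun main() {
    val goal = "Find a restaurant in Muscat that serves Mandi rice near the beach and book it for Saturday at noon"
    runAgent(goal).forEach(::println)
}
```

The point of the sketch is the shape of the loop: the goal is decomposed into steps, each step is executed against some tool or service, and the results come back to you (or feed the next round of planning) without you driving every action yourself.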
2022 © All rights reserved for Oman Establishment for Press, Publication and Advertising (OEPPA). Provided by SyndiGate Media Inc.