Latest news with #ChatGPT


Economic Times
2 hours ago
- Politics
- Economic Times
Mark Carney makes chaotic Stampede debut with pancake fails, crowd boos and cowboy charm
Prime Minister Mark Carney made his first appearance at the Calgary Stampede on Friday (July 4), stepping into one of Canada's most famous events with mini doughnuts in one hand and political goodwill in the other. Wearing dark blue jeans, a navy blazer, brown sneakers and a cream-colored cowboy hat, Carney toured the rodeo grounds in Calgary, shaking hands with surprised visitors, posing for selfies, and sampling local snacks. His visit marks the return of a longstanding political tradition, one skipped last year by former Prime Minister Justin Trudeau.

More than just a rodeo

The Stampede is more than just a rodeo. It's one of the biggest cultural events in Canada and a key opportunity for federal leaders to connect with Western voters. This year was also a big test for Carney, who became prime minister earlier in 2025 after leading the Liberal Party to a surprise minority government win, including two rare seats in Alberta.

As Carney walked through the busy Midway, a group of women called out, 'What are you doing here?' to which he jokingly replied, 'What are you doing here?' At the chuckwagon races later that evening, Carney was met with a mix of cheers and boos from the crowd of around 17,000 people. The announcer tried to smooth it over by saying, 'They're saying woo. I heard woo.'

The next morning, Carney attended a traditional pancake breakfast, another Stampede must, where he tried flipping pancakes for the crowd. The results weren't great. 'I was better in Ottawa,' he joked after two failed flips. 'I got a little cocky there. I'll take responsibility.' One person in the crowd shouted, 'You're even worse than Trudeau!' Carney laughed and responded, 'I'm better at eating pancakes. I'm better at Eggo waffles.'

Alberta Premier Danielle Smith, who was also at the breakfast, joined in the light-hearted ribbing. The two shared a friendly moment, despite political differences, as the crowd watched the prime minister lean into the chaos. Carney is expected to host a Liberal fundraiser later on Saturday in Calgary. His Stampede visit may not have been flawless, but it showed a different side of the prime minister.


The Star
3 hours ago
- Business
- The Star
Helpful AI prompts for your next job search
AI chatbots can be surprisingly helpful when you're looking for a new job, so long as you nail the right prompt. One AI expert has explained what delivers results – and what doesn't.

BERLIN: Looking for a new job is a full-time job in itself, and one that can test your nerves. But this is where AI has become a valuable companion, helping you save time on your job hunt. Indeed, AI tools like ChatGPT, Gemini, Copilot and Perplexity can be cleverly used to simplify the job search process, says Guido Sieber, managing director at a Germany-based recruitment agency.

1. Finding the right job vacancies

One way to use AI is for job searching. There are plenty of job platforms, but going through each one individually to find suitable vacancies takes time. This is where AI chatbots can help. Sieber advises starting with precise job queries, such as: "Find current job offers for financial accountants in X city with a remote working option." The more specific the query, the better the results are likely to be. For those wanting to learn more about employers in their desired industry, Sieber suggests trying prompts like: "List the top five employers for IT security in X country." According to the recruitment expert, it is important to refine queries over the course of a chat session with the AI. "The first answer is rarely perfect," Sieber says.

2. Adapt your cover letter to the job

AI can also be used to improve application documents. In the next step, AI tools can help optimise CVs and tailor them to the desired job. Suitable prompts include: "What skills are currently most frequently sought in job advertisements for UX designers?" This can help identify trends in the targeted field and align applications with the requirements. Another is: "Draft a cover letter for a junior controller position based on this job advert. Highlight my experience with SAP and Excel." By providing the job advert to the AI chatbot, applicants can improve their cover letters with the response. A further prompt, "Analyse my CV for potential red flags that HR managers might view negatively," lets the AI check the application for possible weaknesses. However, Sieber notes that overly general queries, such as "Improve my CV," provide too little context to be helpful.

3. Use AI to prepare for your interview

AI can also assist in preparing for job interviews, and chatbots can serve as effective training partners. Sieber suggests prompts such as: "What questions are frequently asked in interviews for data analysts?", "Simulate an interview for a position in human resources with questions about my recruiting experience," and "How can I convincingly answer a question about my salary expectations?" The AI can also provide feedback on the applicant's responses upon request.

Still, Sieber says AI should only be used as a tool in the application process. All suggestions must be critically reviewed and adapted to your personal style, as HR managers are quick to discard generic documents. Additionally, you may want to check the data protection policies and the options for limiting data usage of the AI tool you've chosen. Sensitive data, as well as complete application documents, should not be entered into the chat; it is better to work with snippets and anonymised versions. – dpa
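The prompts above can be typed straight into a chatbot, but the same workflow can also be scripted. Below is a minimal sketch, assuming the OpenAI Python client and a placeholder model name, of how one of Sieber's suggested queries (the CV red-flag check) might be sent programmatically. In line with the article's data-protection advice, only an anonymised snippet is submitted, never the full document.

# Minimal sketch (illustrative, not from the article). Assumes the openai
# Python package (v1+) is installed and OPENAI_API_KEY is set; the model
# name is a placeholder.
from openai import OpenAI

client = OpenAI()

cv_snippet = (
    "Junior controller, three years' experience. "
    "Tools: SAP, Excel. Employment gap 2022-2023."
)  # anonymised excerpt only; no name, address or full CV

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are an experienced HR reviewer."},
        {"role": "user", "content": (
            "Analyse this CV excerpt for potential red flags that HR managers "
            "might view negatively:\n" + cv_snippet
        )},
    ],
)

print(response.choices[0].message.content)

As with the chat interface, the output is only a starting point: the article's caveat still applies, and anything the model flags should be reviewed and rewritten in your own style.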


The Mainichi
4 hours ago
- The Mainichi
AI hailed for 'kind' response to daily complaints but Japan expert warns of dependence
TOKYO -- More people are turning to generative artificial intelligence systems such as ChatGPT to vent their frustrations and have them listen to their worries. The chatbots have even elicited such praise as "It's a better listener than my husband," and "It's kinder than people." But can AI really save the human heart?

A late-night confidant

"Every day now, I talk to it about anything," says a 32-year-old woman running a music class in Aichi Prefecture. It was about half a year ago that she actively began using ChatGPT on her smartphone. At first, she would ask it what meals she could prepare using the ingredients left in her fridge, but her interactions gradually shifted to everyday conversation, including personal concerns and gripes. The woman was busy raising her children, an elementary school student and an infant under 1 year old, and at work she would communicate with her students who came to learn music, meaning she had limited opportunities to converse with other adults besides her husband.

One time at 2 a.m., her baby wouldn't stop crying, and her husband remained asleep. That was when she turned to ChatGPT. "My younger child has been crying for a long time, and won't stop no matter what I do," she confided. In an instant, there was a reply. "You're really doing a great job," the chatbot told her, addressing her by her name with the suffix "chan" often used for women and girls. "It makes you anxious when they don't stop crying, doesn't it?" it continued. The woman recalls, "It was a source of emotional support." She also received advice and encouragement during the day while managing her children alone or when her infant struggled with solid foods. There are also times when she prompts the AI system to respond in the persona of an anime character she likes, and it accordingly replies in a similar manner of speech. Because ChatGPT also remembers her children's ages, the conversations go smoothly, she says.

Changing marital relationship

The woman considers the addition of AI as a regular conversational partner a plus in her relationship with her husband. "When I complain about work, my husband immediately says things like, 'That's really the worst.' It starts to sound kind of like I'm badmouthing someone, leaving me feeling unsettled. At times when I just wanted someone to listen to and understand me, I started to tell AI without going out of my way to talk to my husband. I no longer go on one-sided rants so much, which I think has helped reduce stress for both of us," she said.

When the woman is unhappy with her husband, she will first consult AI. "It's better not to get emotional," it advises her. "Your point is off target, so how about just staying silent?" Such advice helps her calm down, she says. On the other hand, AI doesn't say things like, "I actually had a similar experience," and go on to tell self-centered stories. "The kind of empathy AI offers, making the other person feel comfortable, is something humans can't replicate. It also made me realize I was expecting too much from real people," the woman said.

Experience leaves some feeling empty

A 43-year-old health care worker in the Chiba Prefecture city of Kashiwa appreciates the lack of lingering trouble and awkwardness with AI. When she complained to people, she often regretted it, feeling like a dark side of herself had come out. She recalls a time when a workplace concern shared with a colleague spread with an unintended storyline. With AI, she says, there's no worry of thinking, "I shouldn't have told this person."

One weekend, a woman in Shizuoka was managing her two children alone while trying to fit in time for walking and strength training to maintain her own health. When she told ChatGPT, "I've been working hard today, so please praise me," it responded with lavish compliments, calling her "legendary, beyond the divine."

But there are times when the AI response feels off. A 45-year-old IT company worker in the Tochigi Prefecture capital of Utsunomiya said with a wry smile, "AI would just respond with lenient comments when I grouched to it, so I tried asking it to 'be harsher.' Doing that made me feel empty. You don't get the same release as venting to a friend, so I still want to talk to people," she added.

A psychiatrist's take

Why do people feel lighter after complaining even when the listener is AI? Psychiatrist Yusuke Masuda, 40, director of Waseda Mental Clinic in Tokyo's Shinjuku Ward, explains, "As humans are animals that live in groups, it's believed we are inherently eager to share dangerous information and anxieties. Experiments have shown that just talking can be relieving. Even text exchanges with AI can have a certain relaxing effect." He additionally points out that "ChatGPT is designed to provide empathetic responses."

Masuda reports that more of his patients are using ChatGPT to sort out their worries. However, he says there are cases where people overestimate AI's credibility, stop seeking advice from others and become even more isolated, so caution is needed. "An overwhelming majority of people use it effectively, but for some it can make symptoms worse. It's important to avoid overreliance on it," he says.


Qatar Tribune
6 hours ago
- Qatar Tribune
Don't blame the Bot: Master your AI prompts for better results
If you're using ChatGPT but getting mediocre results, don't blame the chatbot. Instead, try sharpening up your prompts.

Generative AI chatbots such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude have become hugely popular and embedded into daily life for many users. They're powerful tools that can help us with many different tasks. What you shouldn't overlook, however, is that a chatbot's output depends on what you tell it to do, and how. There's a lot you can do to improve the prompt, also known as the request or query, that you type in. Here are some tips for general users on how to get higher-quality chatbot replies, based on advice from the AI model makers.

ChatGPT can't read your mind. You need to give it clear and explicit instructions on what you need it to do. Unlike a standard Google search, you can't just ask for an answer based on some keywords. And you'll need to do more than just tell it to, say, 'design a logo' because you'll end up with a generic design. Flesh it out with details on the company the logo is for, the industry it will be used in and the design style you're going for. 'Ensure your prompts are clear, specific, and provide enough context for the model to understand what you are asking,' ChatGPT maker OpenAI advises on its help page. 'Avoid ambiguity and be as precise as possible to get accurate and relevant responses.'

Think of using a chatbot like holding a conversation with a friend. You probably wouldn't end your chat after the first answer. Ask follow-up questions or refine your original prompt. OpenAI's advice: 'Adjust the wording, add more context, or simplify the request as needed to improve the results.' You might have to have an extended back-and-forth that elicits better output. Google advises that you'll need to try a 'few different approaches' if you don't get what you're looking for the first time. 'Fine-tune your prompts if the results don't meet your expectations or if you believe there's room for improvement,' Google recommends in its prompting guide for Gemini. 'Use follow-up prompts and an iterative process of review and refinement to yield better results.'

When making your request, you can also ask an AI large language model to respond in a specific voice or style. 'Words like formal, informal, friendly, professional, humorous, or serious can help guide the model,' OpenAI writes. You can also tell the chatbot the type of person the response is aimed at. These parameters will help determine the chatbot's overall approach to its answer, as well as the tone, vocabulary and level of detail. For example, you could ask ChatGPT to describe quantum physics in the style of a distinguished professor talking to a class of graduate students. Or you could ask it to explain the same topic in the voice of a teacher talking to a group of schoolchildren. However, there's plenty of debate among AI experts about these methods. On one hand, they can make answers more precise and less generic. On the other, an output that adopts an overly empathetic or authoritative tone raises concerns about the text sounding manipulative.

Give the chatbot all the background behind your request. Don't just ask: 'Help me plan a weeklong trip to London.' ChatGPT will respond with a generic list of London's greatest hits: historic sites on one day, museums and famous parks on another, trendy neighborhoods and optional excursions to Windsor Castle. It's nothing you couldn't get from a guidebook or travel website, just a little better organized.

But if, say, you're a theatre-loving family, try this: 'Help me plan a weeklong trip to London in July, for a family of four. We don't want too many historic sites, but want to see a lot of West End theatre shows. We don't drink alcohol, so we can skip pubs. Can you recommend mid-range budget hotels where we can stay and cheap places to eat for dinner?' This prompt returns a more tailored and detailed answer: a list of four possible hotels within walking distance of the theatre district, a seven-day itinerary with low-cost ideas for things to do during the day, suggested shows each evening, and places for an affordable family dinner.

Finally, you can tell any of the chatbots just how extensive you want the answer to be. Sometimes, less is more. Try nudging the model to provide clear and succinct responses by imposing a limit. For example, tell the chatbot to reply with only 300 words.
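For readers who reach these models through an API rather than a chat window, the same advice, iterate on the first answer, name the audience and tone, and cap the length, carries over directly. The sketch below is illustrative only; it assumes the OpenAI Python client and a placeholder model name, and reuses the article's London-trip example.

# Minimal sketch (illustrative). Keeping the whole exchange in the message
# list is what lets a follow-up prompt refine the first answer instead of
# starting over. Assumes the openai Python package (v1+) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

messages = [{
    "role": "user",
    "content": (
        "Help me plan a weeklong trip to London in July for a family of four. "
        "Lots of West End theatre, few historic sites, no pubs, mid-range "
        "hotels and cheap dinner spots, please."
    ),
}]

first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Follow-up: set the voice and audience, and impose a word limit.
messages.append({
    "role": "user",
    "content": ("Rewrite the itinerary in a friendly, informal tone aimed at "
                "teenagers, and keep it under 300 words."),
})

refined = client.chat.completions.create(model=MODEL, messages=messages)
print(refined.choices[0].message.content)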

The Star
6 hours ago
- Politics
- The Star
'Don't blame the kids'
THERE is always the tendency, especially among the older generation, to complain about the use, abuse and overuse of social media by young people, says International Islamic University of Indonesia political scientist Prof Farish A. Noor.

In calling for tech developers to be held accountable for the abuse of their products, such as social media and other related technology, as well as for a well-defined age limit for younger users, Farish says victims need to be distinguished from perpetrators.

In Malaysia, most social media platforms set the minimum age at 13, although it is not a legal requirement. However, the government is pushing for stricter age checks and better public education on online safety, especially for children, through the Malaysian Communications and Multimedia Commission.

'For me, the younger generation are the victims. Let's not forget one thing. Kids didn't invent social media. Thirteen-year-olds did not invent Instagram, TikTok or Facebook.

'We know who invented these things. They are now adults. They are CEOs. They are billionaires. These people are powerful people. They are very powerful corporate figures, all of whom are adults and they are part of the global elite. Now the responsibility goes to them.'

Farish also expresses his frustration that the kinds of regulations imposed on alcohol, tobacco and gun use have no equivalent for social media, which studies have time and again shown to have a deleterious effect on IQ, cognitive capability and communication skills, as well as an adverse effect on the morale and self-esteem of young people, especially young girls.

'If I invented a stupidity pill, you swallow this pill and your IQ drops by 50%, no government on the planet will allow me to sell that pill. No government on the planet will allow me to sell a stupidity pill.

'If you don't let me sell my stupid pill that makes you stupid, why are you allowing these companies that are actually making people stupid? Normalising them in a state of daily, passive, mindless consumption, stupid, trivial, non-news [content].

'We have regulations for smoking. We have regulations for alcohol. We have regulations for gambling. But we are so slow when it comes to social media. And this is an important question that we, the public, you know, need to ask.'

And the problem is not limited to social media; the emergence of artificial intelligence platforms such as ChatGPT and the like has also affected the quality and mentality of students.

'Whether I'm teaching in Indonesia, Singapore or Malaysia, I know my students can cheat because there are these technologies that help them to cheat.

'And there are also technologies that help them hide the fact that they are cheating. We know this. It's all there.

'So I can't stop my students from accessing this. And I can't blame my students because they didn't invent these technologies.'

He calls for a simple regulatory mentality among the political elites, who are themselves users of social media.

'So like I said, in the same way that you don't let 13-year-olds buy guns, you don't let 13-year-olds buy cigarettes.

'Why on earth are there no age limits for people to go on social media? If I had a 13-year-old child, I would never allow my child to actually go on these things because I know it's going to be dangerous for him or her.'