
June's Pixel Drop gives Gboard the power to create stickers without Pixel Studio
Stickers are created by typing a prompt and selecting an emotion.
The custom sticker function is available for Pixel 9 and newer devices.
Back in May, we found code in the Gboard beta suggesting Google could add the ability to generate stickers without having to open Pixel Studio. At the time, the feature appeared to be in the early stages of development, since it lacked any UI elements. It was evidently closer to release than we realized, though, because it's arriving on Gboard now.
Google's June Pixel Drop is here, and along with it comes a plethora of new features. Among them is Gboard's new ability to generate custom stickers of whatever you can imagine.
Previously, if you wanted to create a new sticker, you had to go into Pixel Studio first, then access your creation from Gboard's dedicated sticker picker tab. Today's update cuts out that middleman: you can now make custom stickers right in Gboard, without ever opening Pixel Studio.
To generate a new sticker, all you have to do is type in a prompt like 'jelly avocado with black eyes,' pick an emotion, and hit send. You'll also be able to make stickers out of the photos in your library. Whatever photo you choose, the background will be automatically stripped away so the main object becomes the focus of the sticker.
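Google hasn't said exactly how Gboard performs that cutout, but the general step of isolating the subject and making the background transparent is something you can approximate with off-the-shelf tools. Here's a minimal sketch using the open-source rembg Python library; the file names are placeholders, and this only illustrates the idea, not Gboard's actual pipeline.

```python
# Minimal sketch of the "cut out the subject" step using the open-source
# rembg library (pip install rembg pillow). Gboard's own pipeline is not
# public; this is an illustration only. File names are placeholders.
from rembg import remove
from PIL import Image

source = Image.open("library_photo.jpg")   # a photo picked from your library
sticker = remove(source)                   # returns an RGBA image with the
                                           # background made transparent
sticker.save("sticker.png")                # PNG preserves the transparency
```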
This feature is coming to Gboard on Pixel 9 devices and later. There's no word on whether it will eventually trickle down to earlier hardware.
Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info; it's your choice.

Related Articles

Wall Street Journal
Will Trump's Chip Tariffs Do What He Thinks They Will? - Tech News Briefing
President Trump has proposed sweeping tariffs on chips , while offering an exemption: companies that invest in manufacturing in the U.S. won't have to pay. It's an attempt to incentivize more chip production in the U.S., but WSJ Heard on the Street writer Asa Fitch notes that key chip-making companies already have invested in U.S. factories. Plus, Disney is in a difficult bind when it comes to AI. The WSJ's Jessica Toonkel takes us inside the company's thinking. Peter Champelli hosts. Full Transcript This transcript was prepared by a transcription service. This version may not be in its final form and may be updated. Peter Champelli: Welcome to Tech News Briefing. It's Tuesday, August 12th. I'm Peter Champelli for the Wall Street Journal. Like pretty much every company, Disney is trying to figure out how it can use generative AI, but it's facing pushback both from in and outside the company. We'll hear more about what Disney's been up to, including an attempt to make an AI double of Dwayne The Rock Johnson. Then, Trump is threatening huge tariffs to try and incentivize companies to make chips in the US and to get other companies to buy US-made chips. But our Heard on the Street writer thinks the result of those tariffs could be much different and potentially have the opposite effect. But first, in its efforts to navigate artificial intelligence, Disney is in a bind. It's been dancing with ways to incorporate generative AI, including an interactive Darth Vader chatbot that players could talk to in Fortnite. But some in the company worry about pushback from fans and potential legal complications. The Wall Street Journal's Belle Lin spoke with deputy media editor Jessica Toonkel about it. Belle Lin: Jessica, there's a great anecdote in your story about the star Dwayne Johnson and Disney's scrapped plan to use a deepfake of his face for the live-action version of the hit movie Moana. What exactly does this anecdote illustrate about the challenges that face Disney in Hollywood when it comes to AI? Jessica Toonkel: We love this story because you have The Rock who gave his permission actually for this to be done. The idea was Dwayne Johnson would not have to be at every shot. He wouldn't have to be there on set all the time. His cousin, who has his six-foot-three, 250-pound stature was going to be his fake double, basically, they were going to use his body and put Dwayne Johnson's face on it, had also given his permission. So they had all the permissions and everything and the technology to do this deepfake. Yet they could not get comfortable with all the questions around what could it mean if we use this tool? Belle Lin: Why is it so complex for Disney to be more bold in how it uses AI for its creative endeavors? Jessica Toonkel: This is such a fraught subject in Hollywood. It was not that long ago that actors and writers were on strike saying, "You cannot replace us with AI," and the actor's contract is coming up again. So you have the fear of upsetting talent. You have the fear of fans saying, "Hey, this isn't real." And there's the fear of who owns the copyright and who owns these characters that are created by AI. If we work with an AI company to do something, will we still own every piece of that? And Disney can't afford to let go of any of that. We spoke to the general counsel at Disney who was very clear like, we want to make sure Disney owns Darth Vader. Belle Lin: What are some early steps that Disney has taken in using AI inside its shows, movies or games? 
Jessica Toonkel: So we've seen bits and pieces. Disney has a joint venture with Epic Games, the owner of Fortnite, and they created an AI-generated Darth Vader in the game. And within minutes, the gamers figured out a way to get the generated AI Darth Vader to curse at them. They had to fix that. And they did fix it within 30 minutes, and they did feel like even with that happening, it was a success. Just the fact that Disney did that is a huge step from where they were even five years ago. Belle Lin: How would you describe the dynamic inside Disney when it comes to using AI, albeit it's certainly very complicated? Jessica Toonkel: Disney, they understand that this is something they need to do and they want to do it the right way. So they have their team, they've created an AI group, they've done all those things. It's just that this is a company that has been historically probably the most protective of its characters in IP than any company. Belle Lin: Where do you think that this all ends up? Where do you think that Disney comes to a conclusion on what the use of AI is amongst its creative endeavors? Jessica Toonkel: I don't think Disney has to be a first mover on being the first one to use some AI tool for a specific purpose. They're going to continue to be cautious, but they're going to continue to experiment with things because they realize that they have to at least know what the game is. And next year, Disney is going to be naming its new CEO, and it will be really interesting to see who they pick if that person has been involved in these discussions because it will tell us how Disney is thinking about how important AI is for its future. Peter Champelli: That was Wall Street Journal deputy media editor Jessica Toonkel. Coming up, we'll dive deep into Trump's proposed tariffs on chips from overseas and why the exemptions would probably spare the biggest players. That's after the break. Trump has proposed a nearly 100% tariff on chips and semiconductors being imported from overseas, but he's allowing an exemption. Companies that build or plan to build in the US won't have to pay. It's an attempt to incentivize US companies to buy chips from US makers and to get the companies that are making the chips to build factories in the US. But our Heard on the Street writer Asa Fitch says that these tariffs won't result in more US chip production. Asa, in your reporting, you mentioned that advanced chip making is a game only few can play. What are the companies most threatened by these tariffs and what would the immediate effects be? Asa Fitch: So the largest chip makers in the world are effectively Taiwan Semiconductor Manufacturing Company or TSMC and Samsung Electronics and Intel, the US-based chip maker. Those companies will be affected in different ways by these tariffs if they go into place. Now, Trump wants to put in place, he said, a 100% tariff on imports of semiconductors. It's a very complex supply chain. Things move around all over the place all the time. So it's hard to come to a very certain answer about what that impact will be. One thing it doesn't seem like these tariffs will do though is seriously incentivize chip production in the US, advanced chip production in the US, at least, in the way that Trump talked about them last week. Peter Champelli: So why isn't that the case? And could there be any unintentional consequences of the tariffs? 
Asa Fitch: The main reason why that's the case is that when Trump laid out this 100% chip tariff plan last week, he said that there would be exemptions for companies that invest a lot of money in the US. Now, all the large chip makers have already invested tons of money in the US so they've already passed that bar, and that means that they likely, based on the language that Trump used, will get exemptions. So there's no further incentive for these chip companies to build upon their existing manufacturing operations in US based on these tariffs because the tariffs are gone. So if anything, these companies are sort of more incentivized to import stuff tariff-free from other parts of the world where it's cheaper to produce chips than to make them here in the US. So there's a little bit of a mismatch of the stated intent of these tariffs and the actual fact of them, at least as they appear so far, to have been outlined. And granted, that is kind of vague. Peter Champelli: So the target of these tariffs are US companies buying foreign-made chips. But on the flip side, earlier this week, the news broke that Nvidia and Advanced Micro Devices are going to give the Trump administration a 15% cut of their AI chip sales to China. How does this news factor into Trump's goal with the tariffs? Asa Fitch: That is really hard to tell. You could infer that the 15% surcharge on these companies' revenues in China means the Trump administration is trying to make it more expensive to sell this stuff, obviously. And that means that that's going to affect demand in the typical sort of supply-demand way. If you have higher prices, people will buy less of that stuff in general. It's not entirely clear that's going to happen in this case because if China or Chinese companies or Chinese government sees these chips as essential to their broader AI strategy and the key to unlocking AI for China, there's no way they're not going to pay a higher price. So the impact could be pretty limited. There've been some analysts who estimated the impact on Nvidia of this additional fee, something around maybe $3 billion a year. That sounds like a lot of money, but Nvidia is projected to make $200 billion plus in its current fiscal year. So $3 billion is maybe not a ton for that particular company. These are two different things, obviously. The tariffs are meant to incentivize manufacture in the US. These charges on sale to China are meant to disrupt in some way or limit the sales of AI chips in China. But there's two sides of the coin, if you will, or two different kind of objectives within the same envelope of national security, protecting US interests, growing US industry, things like that. Peter Champelli: And with Trump's proposed tariffs on the importing of chips and semiconductors, what would the longer term effects of those be on companies and on consumers? Asa Fitch: It's hard to say right now. We don't know what exact shape the tariffs are going to take. One thing that's clear, obviously, is that when you raise prices of goods, they tend to trickle down to consumers and to businesses who are buying those goods. So that could be the impact, but the magnitude of that is impossible to gauge without knowing exactly what shape these will take. And we've talked about the exceptions. Some of these companies like Apple qualify for exceptions so that the chips inside those iPhones aren't going to be charged a tariff. So it all depends on effectively the implementation. It's really uncertain right now. 
Peter Champelli: That was Wall Street Journal Heard on the Street reporter Asa Fitch. And that's it for Tech News Briefing. Today's show was produced by Julie Chang with deputy editor Chris Zinsli. I'm Peter Champelli for the Wall Street Journal. We'll be back later this morning with TNB Tech Minute. Thanks for listening.


Bloomberg
Ex-OpenAI Researcher Raises Fund to Back Emerging Talent
Leonis Capital has secured $25 million for a new fund to help uncover the next OpenAI. The fund is backed by institutional investors and a mix of researchers and executives from companies like Nvidia Corp, OpenAI and Anthropic, the San Francisco-based venture capital firm said in a statement. Leonis, founded in 2021, has fully deployed its first $10 million fund across AI-native startups including MaintainX and Motion.


Forbes
Therapists Leverage AI For Mental Health By Delving Into Virtual Digital Twins Of Their Clients' And Patients' Minds
In today's column, I examine an advanced use of generative AI and large language models (LLMs) that entails therapists and other mental health professionals making use of so-called digital twins that are reflective of their respective clients and patients. The deal is this. Via the use of personas in generative AI, a feature that nearly all LLMs inherently include, it is presumably conceivable that you could devise a persona that somewhat matches and reflects a client or patient that is undergoing therapy. This is considered a digital twin, or more specifically, a medical digital twin. Yes, perhaps unnervingly, it seems possible to construct an AI-based simulated version of a client or patient that a therapist could then use to gauge potential responses and reactions to a planned line of psychological analyses and therapeutics. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). AI And Mental Health Therapy As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here. Therapists And AI Usage Many therapists and mental health professionals are opting to integrate AI into their practices and overtly use the AI as a therapeutic adjunct for their clients and patients (see my coverage at the link here). Even those therapists and mental health professionals who don't go down the route of incorporating AI are bound to encounter clients and patients who are doing so. Those clients and patients will often walk in the door with preconceived beliefs about how their therapy should go or is going, spurred and prodded by what AI has told them. In this sense, one way or another, therapists and mental health professionals are going to ultimately be impacted by the growing use of generative AI and LLMs. Right now, there are already around 700 million weekly active users of ChatGPT. You might find it of notable interest that the top-ranked use by the public of contemporary generative AI and LLMs is to consult with the AI on mental health matters, see my coverage at the link here. If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. Many people cannot otherwise afford or gain access to human therapists, but access to generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might undercut mental health, doing so on a massive population-level scale, see my discussion at the link here. Personas Are Coming To The Fore Let's shift gears and focus on the use of AI-based personas. 
I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. In the context of mental health, I showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here. As a mental health professional, you ought to give serious consideration to making use of personas for your own self-training and personal refinement. For example, you might craft a persona that will pretend to be a person with deep depression. You could then use this persona to hone your therapeutic prowess regarding depression in patients and clients. It can be quite useful. Plus, there is no danger since it is just AI. You can try out various avenues to gauge what works and doesn't work. No harm, no foul. For my suggestions on how to write prompts that suitably create or cast personas, see the link here. Digital Twins And Humans There is specialized parlance in the tech field that has been around for many years and refers to the concept and practice of using computers to simulate a real object or entity. The parlance is that you are crafting and making use of a digital twin. This became popular when machinery used on factory floors could be modeled digitally. Why would a digital model or simulation of a factory assembly machine be useful? Easy-peasy, there are lots of crucial benefits. One is that before you even construct the machine, you can try it out digitally. You can make sure that the machine will hopefully work suitably once it is constructed and put into operation. Another advantage is that you can readily make lengthy runs of the digital twin and predict when the real version might break down. This gives a heads-up to the maintenance crew working on the factory operations. They get estimates of the likely time at which the machine will potentially start to degrade. Recently, there has been a realization that digital twins can be used in other, more creative ways, such as modeling or simulating human beings. This is often referred to as a medical digital twin (note that other names and phrases are sometimes used too). Medical Digital Twins In a research article entitled 'Toward Mechanistic Medical Digital Twins' by Reinhard Laubenbacher, Fred Adler, Gary An, Filippo Castiglione, Stephen Eubank, Luis L. Fonseca, James Glazier, Tomas Helikar, Marti Jett-Tilton, Denise Kirschner, Paul Macklin6, Borna Mehrad, Beth Moore, Virginia Pasour, Ilya Shmulevich, Amber Smith, Isabel Voigt, Thomas E. Yankeelov, and Tjalf Ziemssen, Frontiers In Digital Health, March 7, 2024, these salient points were made (excerpts): Please note that as emphasized above, the advent of medical digital twins is still early on. There is plenty of controversy associated with the topic. One major qualm is that with a factory floor machine, you can pretty much model every physical and mechanical aspect, but the same can't be said about modeling human beings. At least not yet. Lucky or not, we seem to be more complex than everyday machines. Score a point for humankind. Personas As Digital Twins When you think about devising a medical digital twin, there are customarily two major elements involved: Some would insist that you cannot adequately model the mind without also modeling the body. 
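As one illustration of how simple the mechanics can be, here is a minimal sketch of casting such a self-training persona with a general-purpose LLM API. The OpenAI Python SDK is used purely as an example; the model name, prompt wording, and helper function are illustrative assumptions, and the persona is a fictional practice construct, not a stand-in for any real person.

```python
# Sketch: casting an LLM persona for therapist self-training.
# SDK, model name, and prompt are illustrative; this is a practice aid,
# not a representation of any real client or patient.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONA_PROMPT = (
    "Role-play as a fictional adult experiencing persistent deep depression: "
    "low energy, flat affect, guarded answers, occasional irritability. "
    "Stay in character and do not give clinical advice."
)

def persona_reply(history: list[dict], therapist_message: str) -> str:
    """Send the therapist's next line and return the persona's response."""
    history.append({"role": "user", "content": therapist_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": PERSONA_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

chat: list[dict] = []
print(persona_reply(chat, "How have you been sleeping this past week?"))
```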
It's that classic mind-body debate; see my analysis at the link here. If you dogmatically believe that a mind is unable to be sufficiently modeled without equally modeling the body, I guess that the rest of this discussion is going to give you heartburn. Sorry about that. We are going to make a brash assumption that you can use generative AI to aid in crafting a kind of model or simulation of a person's mind, at least to the extent that the AI will seek to exhibit similar personality characteristics and overall psychological characteristics of the person. So, in that sense, we are going to pursue a medical digital twin that only focuses on the second of the two major elements. Does that mean that the AI-based digital twin is missing a duality ingredient that wholly undercuts the effort? I'm going to say that it doesn't, but you are welcome to take the posture that it does. We can amicably agree to disagree. On a related facet, there are advocates of medical digital twins who would insist that a medical digital twin must encompass the bodily aspects, else it isn't a medical digital twin at all. In that case, I guess we might need to drop the word 'medical' from this type of digital twin. Just wanted to give you a heads-up on these controversies. Personas Of Your Clients Or Patients Moving on, let's further consider the avenue of creating a digital twin of your client or patient so that you can utilize the AI to ascertain your line of therapy and treatment. The first step involves collecting data about the person. The odds are that a therapist will already have obtained an extensive history associated with a client or patient. Those notes and other documents could be used to feed the AI. The idea is that you will provide that data to the generative AI, and it will pattern-match and craft a persona accordingly. You might also include transcripts of your sessions. Feeding this data into AI is often done via a technique known as retrieval-augmented generation (RAG), see my explanation at the link here. Please be very cautious in taking this type of action. Really, really, really cautious. Many therapists are already willy-nilly entering data about clients and patients into off-the-shelf publicly available LLMs. The problem is that there is almost no guarantee of data privacy with these AIs, and you could readily be violating confidentiality and HIPAA provisions. You might also need to certify consent from the client or patient, depending on various factors at play. For more, see my discussion at the link here and the link here. Make sure to consult with your attorney on these serious matters. One approach is to stridently anonymize the data so that the client or patient is unrecognizable via the data you have entered. It would be as though you are simply creating a generic persona from scratch. Whether that will pass a legal test is something your legal counsel can advise you on. Another approach is to set up a secure private version of an LLM, but that, too, can have legal wrinkles. More On Personas As Digital Twins Yet another approach is to merely and shallowly describe the persona based on your overall semblance of the person. This is somewhat similar to my earlier point that you can use personas by simply entering a prompt that the devised persona is supposed to represent a person with depression. That's a vague indication and would seem untethered to a specific person. The downside, of course, is that the surface-level persona might not be of much help to you. 
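To make that caution concrete, here is a deliberately crude sketch of a first-pass scrub of session notes before anything is sent to an external LLM. The patterns and helper function are illustrative only; pattern-based redaction of this kind does not by itself meet HIPAA de-identification standards, so treat it as a starting point for a conversation with your attorney and your security team, not a compliance solution.

```python
# Illustrative only: a crude first-pass redaction of session notes before any
# text leaves the practice. Regex scrubbing like this does NOT by itself
# satisfy HIPAA de-identification; use purpose-built tooling and legal review
# before handling real records.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # SSN-like numbers
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),       # slash-style dates
]

def scrub(note: str, client_name: str) -> str:
    """Replace obvious identifiers in a session note with placeholders."""
    text = note.replace(client_name, "[CLIENT]")
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Met Jane Doe on 3/14/2025; call 555-123-4567.", "Jane Doe"))
```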
What are you going to do with whatever persona you craft? You could try to figure out the emotional triggers of the person, as represented via the persona. What kind of coping style do they have? How does their coping mechanism react to the therapy you have in mind? All sorts of therapy-oriented strategies and tactics can be explored and assessed. In essence, you are trying out different interventions on the persona, i.e., the digital twin. Maybe you are mulling over variations of CBT techniques and want to land on a particular approach. Perhaps you often use exposure therapy and are unsure of how that will go over with the client or patients. This provides a no-risk means of determining your therapy in a simulated environment and prepares you for sessions with the actual person. Don't Fall For The Persona I trust and hope that any therapist or mental health professional going the route of using a persona as a digital twin is going to keep their wits about themselves. Ordinary users of AI who use personas can readily go off the deep end and believe that the persona is real. Do not let that same fate befall you. The persona is merely the persona. Period, end of story. You cannot assume that the persona is giving you an accurate reading of the person. The AI could be completely afield in terms of how the person will actually respond and react. Expect that the AI will almost certainly overrepresent some traits, underrepresent other traits, and be convincing as it does so. Convincingness is the trick involved. Contemporary generative AI is so seemingly fluent that you are drawn into a mental trap of believability. Inside your head, you might hear this internal voice: 'It must be showing me the true inner psyche of my client or patient! The AI is working miracles at modeling the person. Wow, AI is utterly amazing.' You must resist the urge to become over-reliant on the digital twin. Over-reliance is a likely possibility. Here's how. You use the persona. After doing so, you later meet with the client or patient. Everything the AI indicated as to responses and reactions appears to mirror what the person says and does during the session. Awesome. You decide to keep using the persona. Over and over, you use the persona. Voila, you are hooked. The persona has led you down a primrose path. The seemingly uncanny portrayal has been spot-on. The problem is that when the client or patient diverges from the persona, you are going to have your mind turned backward. The person must be wrong, because the persona was always right. In other words, the person is supposed to be acting as the persona does. The world has gone topsy-turvy. But it's you, because you have forsaken your therapist mindset and allowed AI to capture and defeat your real-world acuity. That's bad news. Do not let that happen. Additional Twists And Turns There is a lot more to consider when using AI as a digital twin in a mental health context. I'll be covering more in a series of postings. Be on the watch. One quick point to get your mental juices flowing is this. Suppose that you have gotten written consent from the client or patient, and they know that you are using AI to depict a persona of them. The person comes to one of your later sessions and starts to suspect that you are proceeding as if it is based on what the AI told you. They worry that the AI is portraying them in some unpleasant fashion. Furthermore, they now insist that you let them access the persona. They want to see how it represents them. 
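For the sake of illustration, that kind of comparison can be scripted as a simple loop that sends each candidate opener to the same persona and collects the responses side by side. As before, the SDK, model name, and prompts are assumptions made for the sketch, and the persona's replies are simulations, not predictions about the real person.

```python
# Sketch: comparing candidate intervention framings against the same persona.
# SDK, model name, and prompts are illustrative; the persona's answers are
# simulations, not predictions about a real client or patient.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Role-play as a fictional adult with panic symptoms who is skeptical of "
    "therapy. Stay in character and answer in one or two sentences."
)

candidate_openers = {
    "cbt_reframe": "When the panic starts, what thought shows up first?",
    "exposure_intro": "Would you be willing to sit with that feeling for sixty seconds with me?",
    "validation_first": "That sounds exhausting. What has it been costing you lately?",
}

for label, opener in candidate_openers.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": opener},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```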
Mull that over and think about how you would contend with that potential nightmare scenario. It's a doozy. It could arise. A final thought for now. Albert Einstein famously made this remark: 'My mind is my laboratory.' Yes, that's abundantly true. In the case of mental health therapy, besides your mind being your laboratory, it turns out that AI can be your laboratory too. Proceed with aplomb.