Apple's new app for invitations helps you get your party started
Users with iPhones running iOS 18 or later can create customized invitations using a simple form and distribute them by message, email or link. A host can then use the RSVP feature to see who is planning to come.
Recipients aren't restricted to iPhone users: you can send invitations regardless of device, including to Android users, or by email to people without a smartphone.
Your invite will then be displayed either in the Apple Invites app or on a webpage.
Alongside the occasion and invitation text, recipients can see the location and time, create a calendar entry, or plan a route directly in the Maps app.
There's even a small weather forecast for your event location which may help when it comes to choosing an outfit.
You need an iCloud+ subscription for the full range of Apple Invites features. Plans cost $0.99 a month for the basic option with 50 GB of storage, $2.99 for 200 GB and $10.99 for 2 TB.
Alongside the app, there is also a web version you can access at icloud.com/invites.
To create the invite, users can choose an image from their photo library or from the app's gallery of backgrounds — a curated collection of images representing different occasions and event themes.
Later, participants can contribute photos and videos to a dedicated shared album within each invite to help preserve memories and relive the event.
'With Apple Invites, an event comes to life from the moment the invitation is created, and users can share lasting memories even after they get together,' said Brent Chiu-Watson, Apple's senior director of Worldwide Product Marketing for Apps and iCloud.
'Apple Invites brings together capabilities our users already know and love across iPhone, iCloud, and Apple Music, making it easy to plan special events.'

Yahoo
Apple faces lawsuit over alleged theft of mobile wallet technology for Apple Pay
Apple is currently facing a lawsuit filed in the Northern District of Georgia, Atlanta Division, by Fintiv, which accuses the tech giant of illicitly acquiring the mobile wallet technology used to develop its payment service, Apple Pay. Fintiv provides patented digital solutions for merchant payments, cross-border transactions, and digital asset tokenisation, with more than 100 ecosystems deployed in over 35 countries.

The legal proceedings, initiated by Kasowitz on behalf of Fintiv, claim that Apple engaged in a pattern of criminal activities, including wire fraud and misappropriation of trade secrets. These actions were allegedly part of a scheme to appropriate Fintiv's mobile wallet technology, which has been a factor in the success and revenue generation of Apple Pay.

The lawsuit details that over a decade ago, Apple purportedly sought a business partnership with CorFire, Fintiv's predecessor, under the guise of licensing their mobile wallet technology. During 2011 and 2012, Apple is said to have attended meetings with CorFire, receiving confidential information under NDAs with the intention of forming a licensing agreement. Contrary to this, the lawsuit alleges that Apple used this information for its own benefit, subsequently launching Apple Pay in 2014 with features that Fintiv claims were derived from CorFire's technology.

Furthermore, the complaint accuses Apple of creating an enterprise with banks and payment networks to utilise the contested technology in processing Apple Pay transactions, resulting in substantial annual earnings for the involved parties. The complaint alleged: "By modifying Apple Pay for use on four separate categories of its devices, Apple has repeated and compounded its theft by knowingly utilising Fintiv's stolen technology in the hundreds of millions of iPhones, iPads, Apple Watches and MacBooks it has sold worldwide."
This legal challenge comes on the heels of a separate case that was dismissed last month, in which Apple, Visa, and Mastercard were accused of engaging in anti-competitive payment practices. According to that lawsuit, Apple allegedly received a "cash bribe" from Visa and Mastercard in exchange for not competing with them in the payment industry, resulting in higher transaction costs.

"Apple faces lawsuit over alleged theft of mobile wallet technology for Apple Pay" was originally created and published by Electronic Payments International, a GlobalData owned brand.


Tom's Guide
Forget "I, Robot" — here's what your home robot will actually be like by 2035
In the movie "I, Robot" — which, coincidentally, takes place in 2035 — humanoid robots are ubiquitous, assisting people with every facet of daily life, from walking their dogs to making them breakfast to tucking them in at night. That is, until a malevolent AI overrides their programming to try to take over the world.

In China, humanoid robots are now playing soccer matches against each other, and in early August, a store selling nothing but robots — including one that looks like Albert Einstein — just opened. Over in the U.S., Elon Musk told Tesla investors that he thinks the company will produce a million Optimus robots by 2030.

Are Isaac Asimov's predictions eerily prescient? Will things be more benign? Or will the future of home robotics be somewhere in between? A number of companies already have prototypes of humanoid robots that can walk, jump and more, so it's only a matter of time before they're in all our homes, right?

Maybe one day. However, the near future of home robotics — even ten years from now — is likely to be a lot more practical than exciting. And there are a number of hurdles robots will need to overcome to become as pervasive as they are in science fiction. So, what will home robots look like in 2035? And what will it take to get one in every house?

'I personally believe that in 10 years, [in-home robots are] probably not going to be humanoids and that they're probably going to be focused on helping caregivers,' said Steve Cousins, executive director of the Stanford Robotics Center. Before his stint at Stanford, Cousins was the founder and CEO of Relay Robotics, which developed an autonomous delivery robot, and CEO of Willow Garage, a robotics research lab that operated from 2006 to 2014. 'When you think about aging, there's a market that's growing fast and, you know, we need help,' Cousins said.
'In the U.S., at least, the baby boomers have a lot of money, and so it's a real market.'

By the year 2035, the U.S. population will reach a new milestone: For the first time ever, people over the age of 65 will outnumber people under the age of 18, according to the U.S. Census Bureau. Similar trends are already occurring in other developed nations, such as Japan and in Western Europe, which will put a tremendous strain on the healthcare system. But this is where robots can step in.

One robot already in the field is ElliQ, a small desktop device that looks somewhat like an oversized coffee mug turned upside down. Designed to help elderly individuals with loneliness, ElliQ's 'head' lights up and turns in the direction of the person who's speaking — or who's speaking to it. The device, which costs $50 per month, has been out for a couple of years; New York State gave out 900 of them two years ago as part of a pilot program, which continues to this day.

'The surgeon general announced [loneliness] to be actually the largest epidemic in the US and the root cause of huge medical costs,' said Dor Skuler, CEO and co-founder of Intuition Robotics, which makes the ElliQ. 'The worst thing we [do to] prisoners is isolation. It's solitary confinement, and we end up actually sending our parents and the people we love more than anything into that state, and not because we want to, but that's just how life is very, very often, especially after the loss of the spouse.'

Using AI, the ElliQ engages with a person throughout the day, asking them about themselves, reminding them to take medication, and keeping them engaged.
'We had tons of studies now that came out for the state of New York and medical journals and so on showing over 90% of people living with the ElliQ are seeing their loneliness reduced,' Skuler said. 'But we're also seeing their health KPIs go up, their hospitalization stays go down, the medication adherence go up.'

'When we started, we were just looking at pure companionship and loneliness. But what you find out is, once you're in that position of trust and high engagement, then you're in a position to influence a lot more than just this sense of loneliness.'

Apart from its physical design, if a robot is going to be living inside your home, it has to be considered a part of the family. But how do you build up that acceptance?

'We knew that a device that moves autonomously around the house needed to be trustworthy and likeable,' said Anthony Robson, senior manager of product development, robotics, for Amazon. Over the past few years, his team has been working on the Astro, Amazon's first home robot. The Astro is a small wheeled device with a tablet for a face and a telescoping pole with a camera on the end that extends up about three feet. The Astro was unveiled in 2021 and is currently available as an invite-only product for $1,599. It's been getting generally positive reviews from those who own one.

'We worked hard to design Astro's personality and character traits with that in mind,' Robson said. 'What we didn't expect was how endearing customers would find it when Astro comes and hangs out with them, and how quickly they would think of it as another member of the family. It speaks to how robots in the home that are around us need to be good companions, respecting our space yet being helpful when needed, and being approachable while doing it.'

Skuler thinks that home robotics companies are focusing on the wrong things. 'It's always interesting to me to see companies that approach me and ask for advice,' Skuler said.
'They're all working on the really hard computer science questions. Like, how do I navigate? How do I go upstairs? How do I open a cabinet? How do I take a carton of milk out of the fridge without, you know, wrecking it? But they don't really think of the human interaction in the relationship, especially when it's in the home.'

'Granted, [ElliQ] can't make you a cup of coffee. She's stationary and doesn't do anything beyond talking to you. But I think we've discovered some really, really interesting things. People build an actual relationship with their ElliQ. They see her as a friend, they call her a friend.'

'But more importantly, they project onto her things like trust. We accept her recommendations. And because it's proactive, they follow her recommendations, which is crazy when you think about it. We're talking about people in their 80s and 90s that have never lived with technology.'

A big part of engendering that trust in robots is being reliable. The more a home robot is part of your home, the greater the expectations it has to meet. Earlier this year, Mobile ALOHA, a semi-autonomous robot made by Stanford engineers, was able to saute shrimp and perform some other household chores, such as cleaning and putting away dishes, after being trained by humans, who first performed each task a number of times. Cousins says it's a good start, but noted that the robot only had about a 70 percent success rate — and made a huge mess in the kitchen in the process.

In order to gain acceptance, home robots will need to be as reliable as any other appliance, if not more. 'If you say, hey, I've got this car, 70% of the time it'll get you to work. Forget it, right? I'll just walk,' Cousins said. '70% doesn't work, 90% doesn't work, right? 95%, even 99% means every three months you have a breakdown on the way to work.'

There's a reason why humanoid robots, from Figure to 1X to Elon Musk's Optimus, capture the headlines (and funding — around $1.6 billion last year).
First, there's the sci-fi aspect. From C-3PO to "The Jetsons'" Rosie, some of the most iconic robots have approximated what real live people look like. But on a more practical level, a humanoid robot can best adapt to how our homes are laid out: They don't have a huge footprint, they can reach things on high shelves, and they can go up and down stairs. In theory.

'What's in common between old people and humanoid robots? They fall down a lot, they break easily, and they're very expensive to fix,' Cousins said.

To this point, outside of industrial environments, humanoid robots have yet to really be tested in the real world, and it's going to take a lot of baby steps to get there. Given that robot vacuums only recently learned to navigate around pet poop, it's probably going to take a lot longer for a humanoid robot to surmount one step, let alone a dozen. The footage of the Chinese robots playing soccer was definitely impressive, but a number of them had to be carted off the field on stretchers after toppling over. And unless you can guarantee that would never happen in a home, it creates a potentially dangerous situation.

'I worry about a six-foot-high humanoid falling on grandma,' Cousins said. 'But I'm terrified of the six-foot-high humanoid walking up the stairs and the power goes out on it, and it comes down the stairs and takes Grandma out.'

ElliQ's Skuler says that he purposely avoided building something that looked like a human. 'Studies show that if you have eyes, people will, you know, will build a stronger relationship with them,' he said. 'But it's also super creepy. It's super dystopian. And like that's the line between human or unhuman, especially when you add deepfakes and the voice and, you know, soon you'll have like exo skin or whatever. It's like, why? Why would we ever want to build a machine or create technology that can learn to blur the lines between what's real and what's manufactured?
Like, why is that a good thing for us as humans?'

'I just think developers are, like, in love with their science fiction, and are worried they won't be successful in building affinity and sentiment. So they take this shortcut.'

'I would say, look at Star Wars,' Skuler said. 'Who's more lovable? Artoo or C-3PO? One looks like a robot, has two colored lights and can only beep, and it's super lovable, and C-3PO, you know, is like a stuck-up entity that's hard to talk to and build a friendship with. We should build more R2-D2s and less C-3POs.'

'Today, there's a notable gap between the components that go into toy robots and those that enable industrial robots,' Robson said. 'We need that gap to be filled with optimized solutions that can be mass-produced with reliability and performance that are suitable for consumer products at scale.'

While humanoid robots may eventually become more present, the more likely scenario is that homes will have a number of cheap robots, each assigned to a specific task, rather than a single, expensive robot that can do everything. Why do you need a $20,000 humanoid Rosie that will push around a vacuum, when we already have a $200 robot vacuum that does a pretty good job? Looking at that Chinese store that started selling robots, the priciest robot on sale comes in at a whopping $287,000.

'Realistically, in the course of 10 years, [robots] are going to be like the cost of another car, and you're going to have to have a reason to justify why you're going to buy this additional thing,' Cousins said. 'You know, like, look, there's almost self-driving on my Tesla, right? It's only $100 a month, but I don't pay for it because it doesn't let me actually read a book while I'm going to work. I have to supervise it.'

As with any industry, the cost of building robots will drop as production increases, but will it drop enough in the next 10 years to be economically approachable for the average consumer? It's definitely possible, but tricky.
Let's take Musk's claim of a million robots a year by 2030. If we look at Tesla's production numbers, it took a little over a decade from the time the company started making cars until the point at which it was making more than a million a year. And making a self-balancing robot with AI is a helluva lot harder than building a car, even if it is electric. But even then, will consumers feel the need to spend the equivalent of another car payment for a convenience?

'If it crosses the threshold where I don't have to pay attention, now you've given me back an hour a day,' Cousins said. 'OK, maybe I'll pay $100 a month, but it's still kind of a question, right? It's not that hard for me to drive myself to work. It's the same thing with the home. It's like, yeah, it'd be nice if it's free, but I'm not going to pay $40,000 for it.'

Fear not: a decade from now, your in-home robot isn't going to suddenly stop folding your clothes and try to murder you in your sleep. That's because you're unlikely to own a robot that can actually walk around your house doing your chores, much less go on a homicidal spree. Not only are these robots unlikely to be ready for mass home adoption, but even if they are, they'll be too expensive for the average homeowner to afford. And even if you could foot the bill for one, would you actually want one?

Yes, more tasks will be automated. We already have robots that vacuum and mop, robots that mow your lawn, and robots that fold your laundry — and they'll continue to go about their business. We'll most likely have refrigerators that can re-order things for us, too. But how many other in-home tasks are there that would require a full-time robotic nanny?

Regardless of the shape or physical design of in-home robots, they're likely to know a lot more about you, so that they can better anticipate your needs. Which is a creepiness of a whole other sort.


Tom's Guide
From hyper-personal assistants to mind-reading tech — this is how AI will transform everything by 2035
Picture a morning in 2035. Your AI assistant adjusts the lights based on your mood, reschedules your first meeting, and reminds your child to take allergy medicine, all without a prompt. It's not science fiction; it's a likely reality driven by breakthroughs in ambient computing, emotional intelligence and agentic AI.

Just a few years ago, ChatGPT was an unfamiliar name to most, let alone a daily assistant for summarization, search, reasoning and problem-solving. Siri and Alexa were the top names that came to mind when we wanted to call a friend, place an order or dim the lights. Yet now, in 2025, we have a plethora of AI assistants and chatbots to choose from, many of which are free, and which can do a lot more than control smart home devices. What feels advanced now may seem utterly simplistic in a decade, reminding us that the most mind-blowing AI capabilities of 2035 might still be beyond our current imagination.

By 2035, your AI assistant won't just respond — it will anticipate. This evolution marks the rise of agentic AI, where assistants proactively act on your behalf using predictive analytics, long-term memory and emotion-sensing. These systems can forecast your needs by analyzing historical and real-time data, staying one step ahead of your requests. One assistant that's undergoing such a change is Amazon's Alexa.
According to Daniel Rausch, Amazon's VP of Alexa and Echo, 'Alexa will be able to proactively anticipate needs based on patterns, preferences, and context — preparing your home before you arrive, suggesting adjustments to your calendar when conflicts arise, or handling routine tasks before you even think to ask.' The AI will remember your child's travel soccer team schedule, reschedule your meetings when it detects stress in your voice and even dim your AR glasses when you appear fatigued.

'By 2035, AI won't feel like a tool you "use,"' Rutgers professor Ahmed Elgammal says. 'It'll be more like electricity or Wi-Fi: always there, always working in the background.'

And AIs will respond to more than just your speech. Chris Ullrich, CTO of Cognixion, a Santa Barbara-based tech company, is currently developing a suite of AI-powered Assisted Reality AR applications that can be controlled with your mind, your eyes, your head pose, and combinations of these input methods. 'We strongly believe that agent technologies, augmented reality and biosensing technologies are the foundation for a new kind of human-computer interaction,' he says.

AI in 2035 will see, hear and sense — offering real-time support tailored to you. With multimodal capabilities, assistants will blend voice, video, text and sensor inputs to understand emotion, behavior and environment. This will create a form of digital empathy. Ullrich notes that these advanced inputs shouldn't aim to replicate human senses, but exceed them. 'In many ways, it's easier to provide superhuman situational awareness with multimodal sensing,' he says. 'With biosensing, real-time tracking of heart rate, eye muscle activation and brain state are all very doable today.'

Amazon is already building toward this future. 'Our Echo devices with cameras can use visual information to enhance interactions,' says Rausch.
'For example, determining if someone is facing the screen and speaking enables a more natural conversation without them having to repeat the wake word.' In addition to visual cues, Alexa+ can now pick up on tone and sentiment. 'She can recognize if you're excited or using sarcasm and then adapt her response accordingly,' Rausch says — a step toward the emotionally intelligent systems we expect by 2035.

Memory is the foundation of personalization. Most AI today forgets you between sessions. In 2035, contextual AI systems will maintain editable, long-term memory. Codiant, a software company focused on AI development and digital innovation, calls this 'hyper-personalization,' where assistants learn your routines and adjust suggestions based on history and emotional triggers.

Rather than relying on one general assistant, you'll manage a suite of specialized AI agents. Research into agentic LLMs shows orchestration layers coordinating multiple AIs, each handling domains like finance, health, scheduling or family planning. These assistants will work together, handling multifaceted tasks in the background. One might track health metrics while another schedules meetings based on your peak focus hours. The coordination will be seamless, mimicking human teams but with the efficiency of machines.

Ullrich believes the biggest breakthroughs will come from solving the 'interaction layer,' where user intent meets intelligent response. 'Our focus is on generating breakthroughs at the interaction layer. This is where all these cutting-edge technologies converge,' he explains.

Rausch echoes this multi-agent future. 'We believe the future will include a world of specialized AI agents, each with particular expertise,' he says. 'Alexa is positioned as a central orchestrator that can coordinate across specialized agents to accomplish complex tasks.' He continues, 'We've already been building a framework for interoperability between agents with our multi-agent SDK.
Alexa would determine when to deploy specialized agents for particular tasks, facilitating communication between them, and bringing their capabilities together into experiences that should feel seamless to the end customer.'

Perhaps the most profound shift will be emotional intelligence. Assistants won't just organize your day, they'll help you regulate your mood. They'll notice tension in your voice or anxiety in your posture and suggest music, lighting or a walk.

Ullrich sees emotion detection as an innovation frontier. 'I think we're not far at all from effective emotion detection,' he says. 'This will enable delight — which should always be a key goal for HMI.' He also envisions clinical uses, including mental health care, where AI could offer more objective insights into emotional well-being.

But with greater insight comes greater responsibility. Explainable AI (XAI), as described by arXiv and IBM, will be critical: Users must understand how decisions are made. VeraSafe, a leader in privacy law, data protection, and cybersecurity, underscores privacy concerns like data control and unauthorized use. 'Users need to always feel that they're getting tangible value from these systems and that it's not just introducing a different and potentially more frustrating and opaque interface,' Ullrich says.

That emotional intelligence must be paired with ethical transparency, something Rausch insists remains central to Amazon's mission: 'Our approach to trust doesn't change with new technologies or capabilities. We design all of our products to protect our customers' privacy and provide them with transparency and control.'
He adds, 'We'll continue to double down on resources that are easy to find and easy to use, like the Alexa Privacy Dashboard and the Alexa Privacy Hub, so that deeper personalization is a trusted experience that customers will love using.'

AI may replace some jobs, but more often it will reshape them. An OECD study from 2023 reports that 27% of current roles face high automation risk, especially in repetitive, rules-based work. An even more recent Microsoft study highlighted 40 jobs that are most likely to be affected by AI. Human-centric fields like education, healthcare, counseling and creative direction will thrive, driven by empathy, ethics and original thinking.

Emerging hybrid roles will include AI interaction designers and orchestrators of multi-agent systems. Writers will co-create with AI, doctors will pair AI with human care and entrepreneurs will scale faster than ever using AI-enhanced tools. AI becomes an amplifier, not a replacement, for human ingenuity.

Even the boundaries between work and home will blur. 'While Alexa+ may be primarily focused on home and personal use today, we're already hearing from customers who want to use it professionally as well,' says Rausch. 'Alexa can manage your calendar, schedule meetings, send texts and extract information from documents — all capabilities that can bridge personal and professional environments.'

A 2025 study from the University of Pennsylvania and OpenAI found that 80% of U.S. workers could see at least 10% of their tasks impacted by AI tools, and nearly 1 in 5 jobs could see more than half their duties automated with today's AI. Forbes reported layoffs rippling across industries like marketing, legal services, journalism and customer service as generative AI takes on tasks once handled by entire teams. Yet the outlook is not entirely grim.
As the New York Times reports, AI is also creating entirely new jobs. Automation Alley's vision of a 'new artisan' is gaining traction: As AI lifts mental drudgery, skilled manual work — craftsmanship, artistry and hands-on innovation — may see a renaissance. AI won't kill creativity; it may just unlock deeper levels of it.

Navigating the shift to an AI-augmented society demands preparation. The World Economic Forum emphasizes lifelong learning, experimentation with UBI (universal basic income) and education reform. Workers must develop both technical and emotional skills. Curricula must evolve to teach AI collaboration, critical thinking and data literacy. Social safety nets may be required during reskilling or displacement. Ethics and governance must be built into AI design from the start, not added after harm occurs.

Ullrich notes the importance of designing with inclusivity in mind. 'By solving the hard design problems associated with doing this in the accessibility space, we will create solutions that benefit all users,' he says. Technologies developed for accessibility, like subtitles or eye tracking, often lead to mainstream breakthroughs.

As IBM and VeraSafe highlight, trust hinges on explainability, auditability and data ownership. Public understanding and control are key to avoiding backlash and ensuring equitable access.

As AI augments more aspects of life, our relationship with it will define the outcomes. Daniel Rausch believes the key lies in meaningful connection: 'The goal isn't just responding to commands but understanding your life and meaningfully supporting it.' We must ensure systems are inclusive, transparent and designed for real value. As AI grows in intelligence, the human role must remain centered on judgment, empathy and creativity. Ultimately, the question isn't 'What can AI do?' It's 'What should we let AI do?'
By 2035, AI will be a planner, therapist, tutor and teammate. But it will also reflect what we value — and how we choose to interact with it. Ullrich emphasizes that the future won't be defined just by what AI can do for us, but by how we engage with it: 'Voice may be useful in some situations, gesture in others, but solutions that leverage neural sensing and agent-assisted interaction will provide precision, privacy and capability that go well beyond existing augmented reality interaction frameworks.'

Yet amid this evolution, a deeper question of trust remains. Emotional intelligence, explainability and data transparency will be essential, not just for usability but for human agency. 'Services that require private knowledge need to justify that there is sufficient benefit directly to the user base,' Ullrich says. 'But if users see this as a fair trade, then I think it's a perfectly reasonable thing to allow.'

As AI capabilities rise, we must consciously preserve human ones. The most meaningful advances may not be smarter machines, but more mindful connections between humans and machines. The promise of AI is so much more than productivity; it's dignity, inclusion and creativity. If we design wisely, AI won't just help us get more done, it will help us become more of who we are. And that is something worth imagining.