Need to send a handwritten note? You can hire a robot to write it instead
Turns out, they already are.
The company Handwrytten deploys artificial intelligence to help customers whip up notes and then uses an army of robot scribes, gripping ballpoint pens, to write them.
"The vast, vast, vast majority of the time, you'd never have an idea that it's written by a machine," David Wachs, founder and CEO of the Tempe, Arizona, company, told Business Insider.
After all, we're in a moment in which tech boosters say our digital counterparts will soon free us from work, scrub clean our to-do lists, and wade deeper into our personal lives.
Using technology to recreate the intimacy of a handwritten note also raises questions about authenticity, etiquette, and breaking through the everyday onslaught of emails, DMs, and text messages.
"Everybody's getting so much electronic communication. What really stands out is old-fashioned communication," Wachs said.
He founded Handwrytten in 2014 after leaving a text-messaging startup he'd launched a decade earlier. As he was departing that company, he wanted an easier way to send the handwritten goodbye notes he was drafting for employees and key clients because they would carry more weight than a digital message.
Avoiding the 'uncanny valley'
To keep the letters from looking too perfect, Wachs said, the robots vary letter shapes, line spacing, the left margin, and the strokes that join letters together.
"We do all this stuff to try to create the most accurate human writing, without falling into that uncanny valley," Wachs said.
Using robots that can write in nearly three dozen styles of penmanship — some of which carry alliterative names like Enthusiastic Erin and Slanty Steve — the company sends about 20,000 cards a day to customers or, more often, directly to the recipient.
Most of Handwrytten's customers are businesses, though about 20% to 30% are individual consumers, Wachs said. Clients include companies hoping to engage with customers, recruiters looking to soften up executive prospects, and nonprofits that want to stay close to donors. Sales grew about 30% in 2024, Wachs said.
In recent years, the company gave users the option of having AI write all or part of the messages.
"Our slogan has always been 'Your words in pen and ink,' but half the time now it's not your words, it's ChatGPT," he said.
What matters, Wachs said, is that the resulting note looks real to the recipient. Many people now assume that custom digital messages like emails and texts have been written with AI, he added, which discounts their effectiveness.
Does it count?
As a tactile throwback, a letter written by a robot is real enough for many Handwrytten customers, Wachs said.
While the intent of a letter meant to look handwritten might be genuine, Lizzie Post, great-great-granddaughter of protocol maven Emily Post and coauthor of the book "Emily Post's Business Etiquette," told BI she believes something is lost by using a robot.
Post said a note that someone actually writes by hand is special, not because it shows effort on the part of the sender, but because a person's penmanship — even if it's imperfect — is unique to them and to a moment.
"It makes that handwritten version that much more precious and amazing and special," Post said.
Wachs said that critics have a point when they say part of writing a letter is to demonstrate that someone took the time to do it. Yet, he said, many people are simply too busy.
"Often, the choice is not Handwrytten note or actual handwritten note. The choice is Handwrytten note or nothing," he said.
Wachs, whose business relies on 55 workers and 185 robots, said that the results are convincing enough to help job seekers, business owners, marketers, and others distinguish themselves.
"My wife will receive notes from her friends that use our service," Wachs said. "And she'll be like, 'Wow, they have beautiful handwriting.'"