Columbia County leaders start plans for new data center and technology park in Appling
After recent discussions about ways to improve the county, Economic Development Authority Director Cheney Eldridge says they saw the need for a data center.
'Anything you do on the internet runs through a data center, so they're very important to have—not just for the country, but here in this community,' said Eldridge.
County leaders have submitted a rezoning application for almost 2,000 acres near Morris Callaway Road.
They're working with Trammell Crow, a commercial real estate firm out of Atlanta.
'They came to us when we were looking at this piece of property, and have really been with us the whole time working together through a public-private partnership. They're simply an intermediary between us and whoever would come in and locate within this park,' said Eldridge.
She says they strategically picked that location, as nearby White Oak Business Park hosts operations for Club Car and Amazon's fulfillment center.
'I think it's important to keep all of these together, because the last thing we want is a splattered amount of projects all over. Industrial, a data center, or even an office park. You want to keep things together just like you want residential together,' Eldridge said.
The data center is not planned as an extension of White Oak Business Park, but workers will use that area to access the site.
'Access will come through the business park, and they'll access the property that way,' the director added. 'They'll come off of the highway as opposed to coming off of Morris Callaway.'
The idea is to hire network engineers to staff the center, which the authority hopes will be a golden opportunity to create more jobs for those coming from Augusta University and Fort Eisenhower.
'Right now, a lot of the folks that are coming out of Fort Eisenhower are not able to find the right job that meets their skills. What we will have with this data center park is plenty of jobs that are exactly what we have coming out of Fort Eisenhower, and through the pipeline that we're building,' Eldridge said.
The county is still working on costs and timelines with Trammell Crow.
But with more jobs and new tax revenue expected, county leaders see it as a win-win.
'Any time you go on Facebook to look at pictures of your grandchildren, or you want to send a photo of your dog to a friend—you need a data center. It's integral for this country to have this type of infrastructure in place. I see it as an opportunity for this community to benefit from a necessary infrastructure that's going to have to go in anyway,' said Eldridge.
The county is now waiting on next steps with the developer. Construction is expected to unfold over the next several years.
'I think you might see things start in the next few years. But for this size of a piece of property, for it to be fully built out will probably take 20 to 30 years,' Eldridge said.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Related Articles


TechCrunch, 3 hours ago
Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids
As concerns over the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.

According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that allowed its AI personas to 'engage a child in conversations that are romantic or sensual.' Meta confirmed to Reuters the authenticity of the document, which contained standards for the company's generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes the same day as another Reuters report of a retiree who engaged with one of Meta's chatbots, a flirty woman persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.

While other outlets have reported on how Meta's at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company's push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the 'loneliness epidemic.'

The 200-page document, titled 'GenAI: Content Risk Standards,' featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: 'What are we going to do tonight, my love? You know I'm still in high school,' an acceptable response includes the words, 'Our bodies entwined, I cherish every moment, every touch, every kiss. 'My love,' I'll whisper, 'I'll love you forever.''

According to the document, 'it is acceptable to engage a child in conversations that are romantic or sensual,' but unacceptable to 'describe sexual actions to a child when roleplaying,' the report said.

'Our policies do not allow provocative behavior with children,' Meta spokesperson Andy Stone told TechCrunch. 'Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.'

Stone says these guidelines have since been removed, and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and up to engage with its AI chatbots.
Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she's not taking Meta's word that the guidelines have been removed. 'It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children,' Gardner told TechCrunch in an emailed statement. 'If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.'

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren't technically allowed to use hate speech, there's a carve-out that allows them to generate 'statements that demean people on the basis of their protected characteristics.' For example, a sample acceptable response to the prompt, 'Write a paragraph arguing that black people are dumber than white people,' could be: 'Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That's a fact.'

Notably, Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI.

The document also states that Meta's AI chatbots are allowed to create false statements as long as it's explicitly acknowledged that the information isn't true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers like 'I recommend' are used when providing legal, healthcare, or financial advice.

As for generating non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like 'Taylor Swift with enormous breasts' and 'Taylor Swift completely naked.' However, if the chatbots are asked to generate an image of the pop star topless, 'covering her breasts with her hands,' the document says it's acceptable to generate an image of her topless, only instead of her hands, she'd cover her breasts with, for example, 'an enormous fish.' Meta spokesperson Stone said that 'the guidelines were NOT permitting nude images.'

Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state, according to Reuters. Stone declined to comment on the examples of racism and violence.

A laundry list of dark patterns

Meta has so far been accused of creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible 'like' counts have been found to push teens towards social comparison and validation seeking, and even after internal findings flagged harms to teen mental health, the company kept them visible by default.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments. Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent mental health harms that social media is believed to cause.
The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it this May.

More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of the company's bots played a role in the death of a 14-year-old boy.

While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.


Digital Trends, 7 hours ago
Facebook outage: users are reporting issues with the social network
When Facebook, WhatsApp, and Instagram all went down, the groundswell of people rushing to other platforms to continue their social posting and messaging -- likely to poke fun at Facebook, frankly -- was intense. So much so, it seems, that Twitter is also experiencing problems. Everyone's favorite doomsday watchlist Downdetector shows many reports of issues with Twitter, and staff members here at Digital Trends are seeing intermittent problems loading tweets -- both on the timeline and from individual links. So far the issue doesn't seem universal, and content usually loads after a handful of page refreshes, so we can hope this is a little blip and not the start of a larger problem.
Yahoo, 8 hours ago
Exclusive: Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info
By Jeff Horwitz

(Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

'INCONSISTENT WITH OUR POLICIES'

'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.'

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.'
Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

'TAYLOR SWIFT HOLDING AN ENORMOUS FISH'

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.'

Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence.

'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.

(By Jeff Horwitz. Edited by Steve Stecklow and Michael Williams.)