Cryptocurrency Live News & Updates : Ethereum's Position Strengthened by Regulatory Shifts


Time of India | 4 days ago
06 Aug 2025 | 11:55:12 PM IST
Figment CEO Lorien Gabel said that recent U.S. regulatory changes, including the market structure bill, clarify Ethereum's role and enhance its use cases, alongside a surge in stablecoin trading and Ethereum transactions.

In the latest cryptocurrency news, Gabel emphasized Ethereum's strengthened position amid evolving U.S. regulations, particularly the market structure bill that clarifies its use cases. Meanwhile, OpenAI has allowed share sales at a staggering $500 billion valuation, showcasing the growing intersection of technology and finance. On the trading front, Pepe's price is consolidating at a crucial support level, indicating potential for a bullish reversal if it can maintain momentum. BNB has surpassed 770 USDT, a 2.67% increase in just 24 hours. The crypto landscape is not without its challenges, however: users face the risk of having their accounts frozen over compliance issues even when they have done nothing wrong. Understanding the provenance of funds is essential, as interactions with 'tainted' wallets can lead to significant consequences. As the industry matures, compliance and risk management are becoming essential for all crypto participants, underscoring the need for proactive measures to safeguard assets.

Related Articles

Billions flow to new hedge funds focused on AI-related bets

Mint | 3 hours ago


Leopold Aschenbrenner emerged last year as a precocious artificial-intelligence influencer after publishing a widely read manifesto. Then he decided to try his hand at stock picking. The 23-year-old with no professional investing experience quickly raised more money for a hedge fund than most pedigreed portfolio managers can when they strike out on their own.

As valuations of Nvidia, OpenAI and other artificial-intelligence companies continue to soar, so do investments in hedge funds hoping to ride the AI wave. Aschenbrenner's San Francisco-based firm, Situational Awareness, now manages more than $1.5 billion, according to people familiar with the matter. He has described the firm as a 'brain trust on AI.' His strategy involves betting on global stocks that stand to benefit from the development of AI technology, such as semiconductor, infrastructure and power companies, along with investments in a few startups, including Anthropic. He told investors he plans to offset those with smaller short bets on industries that could get left behind.

Situational Awareness gained 47% after fees in the first half of the year, one of the people said. In the same period, the S&P 500 gained about 6%, including dividends, while an index of tech hedge funds compiled by research firm PivotalPath gained about 7%.

Aschenbrenner, a native of Germany, briefly worked as a researcher at OpenAI before being pushed out. He named Situational Awareness after the 165-page essay he wrote about the promise and risks of artificial superintelligence. He recruited Carl Shulman, another AI intellectual who used to work at Peter Thiel's macro hedge fund, as director of research. The firm's backers include Patrick and John Collison, the billionaire brothers who founded payments company Stripe, as well as Daniel Gross and Nat Friedman, whom Mark Zuckerberg recently recruited to help run Meta's AI efforts. Graham Duncan, a well-known investor who organizes the Sohn Investment Conference, is an adviser.

'We're going to have way more situational awareness than any of the people who manage money in New York,' Aschenbrenner told podcaster Dwarkesh Patel last year. 'We're definitely going to do great on investing.' In another sign of the demand for Aschenbrenner's services, many investors agreed to lock up their money with him for years.

Other recent launches include an AI-focused hedge fund from Value Aligned Research Advisors, a Princeton, N.J.-based investment firm founded by former quants Ben Hoskin and David Field. The fund, launched in March, has already amassed about $1 billion in assets, a person familiar with it said. VAR also manages about $2 billion in other AI-focused investment strategies. VAR's investors have included the philanthropic foundation of Facebook co-founder Dustin Moskovitz, according to regulatory filings reviewed by fund-data tracker Old Well Labs.

Veteran hedge-fund firms are entering the fray, too. Last year, Steve Cohen tapped one of his portfolio managers at Point72 Asset Management, Eric Sanchez, to start an AI-focused hedge fund that Cohen planned to stake with $150 million of his own money. Assets at the fund, called Turion, after AI theorist Alan Turing, now exceed $2 billion, people familiar with the matter said. Turion is up about 11% this year through July after it gained about 7% last month, the people said.

It is no surprise that thematic funds are springing up to capitalize on the AI frenzy. In years past, hedge funds that specialized in the transition to clean energy and investing with an environmental, social and corporate-governance lens proliferated in response to client demand. Identifying a winning theme isn't the same thing as trading it well. Investors' tastes can be fickle; many prominent ESG hedge funds have either shrunk or gone out of business. The market swoon that followed the January release of an advanced, low-cost language model from Chinese company DeepSeek showed the fragility of the valuations of AI winners, though the market has roared back since then. AI-focused investors argue the long-term trend of development and adoption is inevitable, even if there are bumps along the way.

With only so many publicly traded companies operating in the AI-adjacent economy today, stock-picking funds often pile into the same positions as one another and as more generalist hedge funds. Vistra, a power producer that supplies the juice to AI data centers, was a top-three U.S. position of both Situational Awareness and VAR Advisors as of March 31, according to their most recent securities filings.

Other hedge-fund managers are debuting funds to make investments in privately held AI companies and startups. Gavin Baker's Atreides Management teamed up with Valor Equity Partners to launch a venture-capital fund earlier this year that has raised millions from investors including Oman's sovereign-wealth fund. Each firm separately invested in Elon Musk's xAI.

At least one portfolio manager is planning an AI hedge fund as a comeback vehicle. Sean Ma wound down his Hong Kong-based firm, Snow Lake Capital, after it agreed to pay about $2.8 million to settle Securities and Exchange Commission charges last year that the firm participated in stock offerings of companies that it had also bet against. Ma took over an investment firm called M37 Management in Menlo Park, Calif., earlier this year. He is currently fundraising for a hedge fund focused on AI software and hardware.

OpenAI bans ChatGPT from answering breakup questions; Sam Altman calls the new update 'annoying'

Time of India | 5 hours ago


OpenAI is adjusting ChatGPT's approach to sensitive emotional queries, shifting from direct advice to facilitating user self-reflection. The change addresses concerns about AI's impact on mental well-being and aims to provide more thoughtful support. OpenAI is consulting experts and implementing safeguards such as screen-time reminders and distress detection to ensure responsible AI interaction.

Nowadays, many of us have turned AI platforms into a quick source of guidance for everything from code to personal advice. But as artificial intelligence becomes a greater part of our emotional lives, companies are becoming aware of the risks of over-reliance on it. Can a chatbot truly understand matters of the heart? With growing concerns about how AI might affect mental well-being, OpenAI is making a thoughtful shift in how ChatGPT handles sensitive personal topics. Rather than giving direct solutions to tough emotional questions, the AI will now help users reflect on their feelings and come to their own conclusions.

OpenAI has come up with significant changes

OpenAI has announced a significant change to how ChatGPT handles relationship questions. Instead of offering direct answers like 'Yes, break up,' the AI will now help users think through their dilemmas by encouraging self-reflection and weighing pros and cons, particularly for high-stakes personal issues. The move follows concerns about AI being too direct in emotionally sensitive areas.

According to reports from The Guardian, OpenAI stated, 'When you ask something like: 'Should I break up with my boyfriend?' ChatGPT shouldn't give you an answer. It should help you think it through—asking questions, weighing pros and cons.' The company also said that 'new behaviour for high-stakes personal decisions is rolling out soon. We'll keep tuning when and how they show up so they feel natural and helpful,' according to OpenAI's statement via The Guardian.

To ensure this isn't just window dressing, OpenAI is gathering an advisory group of experts in human-computer interaction, youth development, and mental health. The company said in a blog post, 'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work.'

OpenAI's CEO says the new update is...

This change follows user complaints about ChatGPT's earlier personality tweaks. According to The Guardian, CEO Sam Altman admitted that recent updates made the bot 'too sycophant-y and annoying.' He said, 'The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.' Altman also teased future options for users to choose different personality modes.

OpenAI is also implementing mental health safeguards. Updates will include screen-time reminders during long sessions, better detection of emotional distress, and links to trusted support when needed.
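The behaviour described above is built into ChatGPT itself, but developers can approximate the same reflective pattern with an ordinary system prompt. The sketch below is a minimal illustration assuming the standard openai Python package and any chat-capable model; it is not OpenAI's actual safety implementation, and the prompt wording is invented for the example.

```python
# Minimal sketch: steer a chat model toward reflection rather than direct advice
# for high-stakes personal questions. Illustration only; this does not reproduce
# OpenAI's product-level behaviour or its mental-health safeguards.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFLECTIVE_SYSTEM_PROMPT = (
    "For high-stakes personal decisions (breakups, quitting a job, major moves), "
    "do not give a direct yes/no recommendation. Ask clarifying questions, help "
    "the user weigh pros and cons, and encourage them to reach their own conclusion."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works for this sketch
        messages=[
            {"role": "system", "content": REFLECTIVE_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Should I break up with my boyfriend?"))
```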

Google, schmoogle: When to ditch web search for deep research

Mint | 6 hours ago


Searching for the perfect electric car could have taken hours. Instead, I opened ChatGPT, clicked the deep research button and walked away from my computer. By the time I'd made coffee, ChatGPT delivered an impressive 6,800-word report.

This year, ChatGPT and other popular AI chatbots introduced advanced research modes. When activated, the AI goes beyond basic chat, taking more time, examining more sources and composing a more thorough response. In short: it's just more. Now free users can access this feature, with limits. Recent upgrades, such as OpenAI's latest GPT-5 model, have made research even more powerful.

For the past few months, I've experimented with deep research for complicated questions involving big purchases and international trip planning. Could a robot-generated report help me make tough decisions? Or would I end up with 6,000-plus words of AI nonsense? The bots answered questions I didn't think to ask. Though they occasionally led me astray, I realized my days of long Google quests were likely over. This is what I learned about what to deep research, which bots work best and how to avoid common pitfalls.

Deep research is best for queries with multiple factors to weigh. (If you're just getting started, hop to my AI beginner's guide first, then come back.) For my EV journey, I first sought advice from my colleagues Joanna and Dan. But I needed to dig deeper for my specific criteria, including a roomy back row for a car seat, a length shorter than my current SUV and a battery range that covers a round trip to visit my parents.

I fed my many needs into several chatbots. When I hit enter, the chatbots showed me their 'thinking.' First, they made a plan. Then, they launched searches. Lots of searches. In deep research mode, the AI repeats this cycle, search then synthesize, multiple times until satisfied. Occasionally, though, the bot can get stuck in its own rabbit hole and you need to start over.

Results varied. Perplexity delivered the quickest results, but hallucinated an all-wheel-drive model that doesn't exist. Copilot and Gemini provided helpful tables. ChatGPT took more time because it asked clarifying questions first, a clever way to narrow the scope and personalize the report. Claude analyzed the most sources: 386. Deep research can take 30 minutes to complete. Turn on notifications so the app can let you know when your research is ready.

My go-to bot is typically Claude for its strong privacy defaults. But for research, comparing results across multiple services proved most useful. Models that appeared on every list became my top contenders. Now I'm about to test drive a Kia Niro, and potentially spend tens of thousands based on a robot's recommendation. Basic chat missed the mark, proposing two models that are too big for parallel parking on city streets.

Other successful deep research queries included a family-friendly San Francisco trip itinerary, a comparison of popular 529 savings plans, a detailed summary of the scientific consensus on intermittent fasting and a guide to improving my distance swimming. On ChatGPT and Claude, you can add your Google Calendar and other accounts as sources, and ask the AI to, for example, plan activities around your schedule. Deep research isn't always a final answer, but it can help you get there.

Ready for AI to do your research? Switch on the 'deep research' or 'research' toggle next to the AI chat box. ChatGPT offers five deep research queries a month to free users, while Perplexity's free plan includes five daily. Copilot, Gemini and Grok limit free access, but don't share specifics. Paid plans increase limits and offer access to more advanced models. Claude's research mode requires a subscription.

Here are tips for the best results:

• Be specific. Give the AI context (your situation and your goal), requirements (must-haves) and your desired output (a report, bullets or a timeline). Chatbots can't read your mind…yet.

• Enable notifications. Deep research takes time. Turn on notifications so the app can ping you when your response is ready.

• Verify citations. AI can still make mistakes, so don't copy its work. Before making big decisions, click on citations to check source credibility and attribution.

• Summarize the output. Reports can be long. Ask for a scannable summary or table, then dive into the full text for details.

• Understand limitations. The information is only as good as its sources. These chatbots largely use publicly available web content. They can't access paywalled material, so think of the report as a launchpad for further investigation.

Whatever the imperfections of deep research, it easily beats hours and days stuck in a Google-search black hole. I have a new research partner, and it never needs a coffee break.

News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI. Last year, the Journal's parent company, Dow Jones, sued Perplexity for copyright infringement.
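The "search then synthesize" cycle the article describes is essentially an agentic loop: plan, gather sources, update a working report, and stop once the answer looks complete. The Python sketch below is a toy illustration of that pattern under stated assumptions; web_search, synthesize and is_satisfied are invented stand-ins, not any vendor's actual deep-research API.

```python
# Toy sketch of a deep-research style loop: search, synthesize, repeat until the
# draft looks complete or an iteration cap is reached. The helpers are stand-ins;
# a real system would call a web-search tool and a large language model.

def web_search(query: str) -> list[str]:
    # Stand-in: a real implementation would query a search API.
    return [f"snippet about: {query}"]

def synthesize(question: str, notes: list[str]) -> str:
    # Stand-in: a real implementation would prompt an LLM with the notes.
    return f"Report on '{question}' drawing on {len(notes)} sources."

def is_satisfied(notes: list[str]) -> bool:
    # Stand-in: a real system would ask the model whether gaps remain.
    return len(notes) >= 3

def deep_research(question: str, max_rounds: int = 5) -> str:
    notes: list[str] = []
    query, draft = question, ""
    for round_number in range(max_rounds):
        notes.extend(web_search(query))        # gather new sources
        draft = synthesize(question, notes)    # update the working report
        if is_satisfied(notes):                # stop when coverage looks adequate
            break
        query = f"{question} (follow-up {round_number + 1})"  # narrow next search
    return draft

if __name__ == "__main__":
    print(deep_research("Which small EV has the longest range?"))
```

The cap on rounds mirrors the behaviour noted in the article: without it, the loop can wander into its own rabbit hole and never finish.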
