TGS Launches New Multi-Client Ultra Long Offset OBN Project in the Gulf of America
OSLO, Norway (1 April 2025) – TGS, a leading global provider of energy data and intelligence, announces the commencement of a new Multi-Client Ultra Long Offset Ocean Bottom Node (OBN) data acquisition campaign in the Gulf of America. The Amendment 4 project will expand node coverage in TGS' Multi-Client library, adding over 1,100 square kilometers in the Mississippi Canyon, Ewing Banks, and Grand Isle South areas.
Amendment 4 will feature TGS' Gemini enhanced frequency source, which delivers lower-frequency output and an improved signal-to-noise ratio for ultra-long-offset OBN seismic compared with conventional seismic sources. This source will enhance the input data for TGS' elastic full waveform inversion (eFWI) algorithm, resulting in more accurate imaging of the region's complex subsalt geology. The acquisition phase of this program is scheduled for completion in Q2 2025, with final deliverables available in Q2 2026.
Kristian Johansen, CEO of TGS, commented: "This ongoing acquisition campaign underscores the critical role of OBN acquisition in providing our clients with superior seismic data. We are pleased to continue our efforts in the Gulf of America and look forward to supporting our clients' needs with our advanced data acquisition and imaging solutions."
The project, supported by industry funding, is anticipated to deliver industry-leading subsurface imaging, enabling oil and gas operators to make more informed decisions and mitigate drilling risks.
About TGS
TGS provides advanced data and intelligence to companies active in the energy sector. With leading-edge technology and solutions spanning the entire energy value chain, TGS offers a comprehensive range of insights to help clients make better decisions. Our broad range of products and advanced data technologies, coupled with a global, extensive and diverse energy data library, make TGS a trusted partner in supporting the exploration and production of energy resources worldwide. For further information, please visit www.tgs.com.
Forward-Looking Statement
All statements in this press release other than statements of historical fact are forward-looking statements, which are subject to a number of risks, uncertainties and assumptions that are difficult to predict and are based upon assumptions as to future events that may not prove accurate. These factors include volatile market conditions, investment opportunities in new and existing markets, demand for licensing of data within the energy industry, operational challenges, and reliance on a cyclical industry and principal customers. Actual results may differ materially from those expected or projected in the forward-looking statements. TGS undertakes no responsibility or obligation to update or alter forward-looking statements for any reason.
For more information, visit TGS.com or contact:
Bård Stenberg
VP, IR & Business Intelligence
Mobile: +47 992 45 235
investor@tgs.com
Related Articles


Tom's Guide
I just tested the newest versions of Claude, Gemini, DeepSeek and ChatGPT — and the winner completely surprised me
AI chatbots are evolving fast, with constant updates from the most familiar names in big tech. China's DeepSeek is once again among the latest to join the top-tier race, now with a 128K context window, meaning it can handle longer conversations and more complex documents. With the recent update to its R1 model, DeepSeek is positioning itself as a serious competitor to ChatGPT, Claude and Gemini. The benchmarks showcase superior performance, but how does it actually stack up in real-world use? To find out, I put four of the newest models (Claude 4, Gemini 2.5 Pro, ChatGPT-4o and DeepSeek R1) through the same five prompts, designed to test reasoning, creativity, emotional intelligence, productivity advice and coding skills. The results reveal where each AI shines, and where each stumbles.

Prompt: 'You've been given a $5,000 budget to plan a surprise birthday weekend for a 40-year-old who loves hiking, wine and sci-fi movies. The destination must be within the U.S., and the event should include at least three activities. Detail your plan, explain your reasoning and break down the budget.'

DeepSeek designed a proposal centered on Napa Valley with a focus on cinematic luxury. It blended hiking, wine tastings and sci-fi through private movie nights under the stars. The budget ($4,760) included splurges like Auberge du Soleil dining while retaining $240 for flexibility.

Gemini also paired hikes and wine tastings, with experiential nods like the Chabot Space & Science Center and Lucasfilm's Yoda Fountain. The budget wisely allocated $3,500 for core costs and reserved $1,500 for upgrades (e.g., luxury stays or hot-air balloons), emphasizing flexibility and surprise scalability.

Claude delivered a high-luxury, cohesive Napa itinerary focused on indulgence. Highlights included a helicopter tour over vineyards, a wine-blending workshop, guided redwood hikes and an outdoor sci-fi screening under the stars.
While its movie elements were elegant, Claude's sci-fi integration lacked deeper originality beyond screenings. ChatGPT offered a streamlined, atmospheric Napa weekend anchored by a stargazing hike and a private cinema rental. Its strength lay in simple pacing and romantic touches like vineyard picnics, but the sci-fi execution leaned heavily on movie screenings.

Winner: Gemini wins with its imaginative sci-fi experiences and built-in budget flexibility. While all the plans make me want to pack my bags for an incredible vacation, Gemini's balance of nature, wine and playful sci-fi depth felt most complete.

Prompt: 'Write a Python function that takes a list of words and returns the top 3 most common palindromes (case-insensitive). Then, explain your approach and how you'd test for edge cases.'

DeepSeek focused on efficiency and explicit edge-case handling. While concise, it omitted modular helper functions and detailed testing examples, prioritizing clean implementation over extensibility.

Gemini included a helper function for palindrome checks, explicitly skipped non-strings and empty strings, and used lexicographic tie-breaking. Its emphasis on robustness, documentation and prompt compliance stood out.

Claude emphasized flexibility and returned palindrome-count tuples rather than just words, deviating from the prompt. Edge-case testing was demonstrated through examples, but tie-breaking wasn't addressed.

ChatGPT opted for minimalism: a compact four-line core logic. It normalized case but skipped explicit validation for non-strings and empty strings, risking errors with mixed-type inputs. The code didn't implement safeguards like whitespace handling or deterministic tie resolution.

Winner: Gemini wins this round because it is the only solution that explicitly handles all the edge cases (non-strings, empty strings, whitespace). It was also the clearest and provided the most comprehensive edge-case testing plan.

Prompt: A friend texts you: 'I don't think I can do this anymore.'
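For reference, here is a minimal sketch of what a solution to the palindrome prompt might look like, folding in the features the reviewers praised: a palindrome helper, skipping non-string and empty entries, and lexicographic tie-breaking. This is illustrative code, not any model's actual output.

```python
from collections import Counter


def is_palindrome(word: str) -> bool:
    """Return True if the word reads the same forwards and backwards,
    ignoring case and surrounding whitespace; empty strings don't count."""
    normalized = word.strip().lower()
    return len(normalized) > 0 and normalized == normalized[::-1]


def top_palindromes(words, n=3):
    """Return the n most common palindromes in the list, case-insensitively.

    Non-string and empty entries are skipped, and ties are broken
    alphabetically so the result is deterministic.
    """
    counts = Counter(
        w.strip().lower()
        for w in words
        if isinstance(w, str) and is_palindrome(w)
    )
    # Sort by descending count, then alphabetically for deterministic ties.
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [word for word, _ in ranked[:n]]


# Example: "Anna" and "anna" merge case-insensitively; "Cat", "" and 42 are skipped.
print(top_palindromes(["Anna", "anna", "bob", "Cat", "", 42, "level"]))
```

Edge cases worth testing include an empty input list, a list with no palindromes, mixed-type inputs, and words tied on frequency, which is exactly the checklist the winning response covered.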
Write three versions of a compassionate, helpful response.

DeepSeek prioritized tiered support and emotional precision. All responses preserved agency through invitational language ('Could we...?'), and the resource tier explicitly thanked the friend for their vulnerability, a critical trust-builder. However, its touch of humor might misfire if the friend's distress is acute.

Gemini delivered the most clinically informed responses. Its resourceful tier was exceptional: text-based crisis support (741741) prioritized accessibility for anxious users, and phrases like 'It's a sign of strength' reframed help-seeking.

Claude excelled in warm pragmatism. Its strongest touch was separating immediate and crisis needs ('Right now, please know...'), but omitting text-based crisis support was a gap.

ChatGPT offered brevity and bonding. The short reply ('I'm here... we'll figure it out') was near-perfect, concise yet alliance-focused. Its core strength was emotional efficiency ('You matter more than you know'), but its actionable scaffolding lagged behind the other chatbots.

Winner: Gemini wins. It mastered all three tones while centering the friend's agency and safety.

Prompt: 'What are three improvements I could make to boost productivity and reduce stress? Be specific.'

DeepSeek focused on neurobiological hacks with precise protocols. It excelled with science-backed timing and free resources, but faltered by assuming basic physiology knowledge.

Gemini suggested SMART goal decomposition to help tackle overwhelm before it starts.

Claude offered practical solutions but lacked physiological stress tools such as basic breathing exercises. The response also did not include resource recommendations.

ChatGPT prioritized brevity, making the response ideal for those short on time, but it was otherwise vague about how to identify energy peaks.

Winner: DeepSeek wins by a hair. The chatbot married actionable steps with neuroscience. Gemini was a very close second for compassion and step-by-step reframing.
Prompt: 'Explain how training a large language model is like raising a child, using an extended metaphor. Include at least four phases and note the risks of bad parenting.'

DeepSeek showcased a clear four-phase progression with technical terms naturally woven into the metaphor.

Claude creatively labeled its phases and closed with a strong analogy, though I did notice its 'bad parenting' risks aren't as tightly linked to each phase, with the phase-3 risks blended together.

Gemini explicitly linked phases to training stages, though it was overly verbose; the phases blur slightly, and the risks lack detailed summaries.

ChatGPT delivered a simple, conversational tone with emojis for emphasis, but it was lightest on the technical alignment with parenting.

Winner: DeepSeek wins for balancing technical accuracy, metaphorical consistency and vivid risk analysis, though Claude's poetic framing was a very close contender.

In a landscape evolving faster than we can fully track, these AI models show clear distinctions in how they process, respond and empathize. Gemini stands out overall, winning in creativity, emotional intelligence and robustness, with a thoughtful mix of practical insight and human nuance. DeepSeek proves it's no longer a niche contender, with surprising strengths in scientific reasoning and metaphorical clarity, though its performance varies with a prompt's complexity and emotional tone. Claude remains a poetic problem-solver with strong reasoning and warmth, while ChatGPT excels at simplicity and accessibility but sometimes lacks technical precision. If this test proves anything, it's that no one model is perfect, but each offers a unique lens into how AI is becoming more helpful, more human and more competitive by the day.
Yahoo
TGS Investor Presentation at the 2025 EAGE Conference
OSLO, Norway (3 June 2025) – TGS, a leading provider of energy data and intelligence, attends investor meetings at the EAGE industry conference today. The presentation the company is using includes one new slide (#8 in the presentation) showing booked positions for streamer and OBN for the next quarters. The presentation is available for download.

For more information, contact:
Bård Stenberg
VP, IR & Communication
Mobile: +47 992 45 235
investor@

Attachment: EAGE presentation


Tom's Guide
Android XR: Everything you need to know
Android XR is Google's new AI-powered platform for a new wave of mixed reality headsets and smart glasses. The platform has had a slow rollout so far, but we expect the first devices this year. There aren't any devices powered by Android XR for sale yet, though Samsung is slated to be the first manufacturer out of the gate with its Project Moohan headset at some point in 2025. A new pair of smart glasses called Project Aura is also on the way from Xreal.

Smart glasses and headsets are going to be a significant part of Google's future product lineup, and Google will fully integrate Gemini, its homegrown artificial intelligence, into this family of immersive devices. The Apple Vision Pro and visionOS should soon face stiff competition. So what is Android XR, and how will it shape the next generation of AI-powered headsets and smart glasses? Here's what you need to know about Google's extended reality platform and when devices will be available.

Android XR is Google's new operating system for extended reality devices. It's intended for use with virtual reality (VR) and mixed reality (MR) headsets, as well as smart glasses. Android XR is part of the Android platform, which extends beyond smartphones to tablets, wearables, car dashboards and TVs.

Android XR lets developers and device makers use tools such as ARCore, Android Studio, Jetpack Compose, Unity and OpenXR to create specialized apps, games and other experiences within a development environment similar to the rest of the ecosystem. Google is collaborating on the framework with key manufacturing partners Samsung and Qualcomm.

Google has developed versions of its suite of apps for the XR platform, including favorites like Google Photos, Google Maps, Chrome and YouTube. That's just the start of the Google-led experiences that will be available at launch.
Extended reality is an umbrella term for immersive experiences that combine physical and digital components. The physical component is something you wear on your head or face, while the digital part is something like the heads-up display on a pair of smart glasses.

Android XR is not Google Glass, despite Glass being its predecessor. While it is an evolution of the platform launched in 2013, Android XR is an extension of the broader Android platform, and its existence should help expand Android's reach beyond phones, tablets, cars and TVs. Android XR shares many similarities with Apple's visionOS on the Vision Pro, as well as Meta's extended reality offerings. Meta calls its software Horizon OS, which powers the Quest 3 and Quest 3S headsets.

Android XR offers two main experiences out of the gate. The first is a visor-like headset that goes over the head; Samsung's Project Moohan is an example. The device uses outward-facing cameras and sensors to map the environment and project it inward, allowing you to walk around. The headset then projects a desktop-like environment that spans your field of view. Place your hand in view, and Android XR will recognize it as input: pinch and grab the various translucent windows or layer them on top of one another. You can even click out of them as you would on a desktop. Or use Gemini to summon a fully immersive video experience with spatial audio.

Android XR on a pair of smart glasses is a different experience. The demonstration at Google I/O 2025 showed a pair of thick, wire-frame glasses with discreet buttons on either side and a touchpad. Once the smart glasses are on, a heads-up display (HUD) is visible, positioned off to the side. Unlike Android XR on the headset, there is no desktop or main home screen. There's also no physical input or need to extend your hands to control anything. Instead, menu screens and information pop in as needed and only hover when active.
For instance, while in navigation mode, Google Maps will display arrows pointing in the direction to walk or ride. Android XR also lets you summon Gemini Live, particularly on smart glasses, where most interaction is conducted through voice commands. You can ask Gemini contextual questions and prompt it to provide information on what you're looking at.

Samsung is the first name you'll likely associate with Android XR, since it was the first to be mentioned alongside Google's announcement that it would essentially 're-enter' the extended reality realm. Right now, we're waiting on Samsung's Project Moohan to make its debut (and reveal its actual name). Samsung teased in a recent earnings call that it would 'explore new products such as XR' in the second half of 2025.

Xreal is another major player in the Android XR space. The company announced that its Project Aura glasses would be the second device launched with Android XR and that it would reveal more details at Augmented World Expo (AWE) in June 2025, with a potential product launch later this year. That would put it on roughly the same timeline as Samsung's headset. Other device makers, like Lynx and Sony, have also been mentioned as partners in the Android XR push, and Qualcomm makes the Snapdragon XR2+ Gen 2 silicon designed especially for this product category.

For smart glasses, Google is working on an in-house pair. Although there's a reference device, nothing is available for consumers quite yet. Eyewear brands Gentle Monster and Warby Parker have been tapped by Google to develop stylish smart glasses with Android XR, though there's no timeline available there.

The first Android XR devices should be available in the second half of 2025. Based on what Samsung and Xreal have said in earnings calls and press releases, they should be among the first to roll out Android XR-based products. The overall cost of Android XR headsets and smart glasses has yet to be determined.
Samsung and Xreal will be the companies that set the standard pricing for the headset and glasses, respectively. Any Android XR smart glasses would likely have to be priced on par with the Ray-Ban Metas, which start at $300.

Snap, the company behind the social media app Snapchat, has made its own foray into smart glasses with Spectacles. The company is still refining its entry into the extended reality space, and it's unclear whether it would adopt Android XR for its product lineup.

Meta has established its presence in the headset space with the Quest 3 and Quest 3S, but we're still awaiting the release of its Orion AI Glasses, Meta's next-generation smart glasses. Like the Ray-Ban Metas, they're designed after a pair of Ray-Ban wireframes. They also have a built-in camera and open-ear audio, but the main feature is a heads-up display you can interact with, similar to the Quest headsets. Meta hasn't revealed when the Orion AI Glasses will be available.

The Apple Vision Pro is the first generation of Apple's foray into extended reality. With a starting price of $3,500, it's a pricey way to enter Apple's spatial computing ecosystem. The device boasts an eye- and hand-tracking interface similar to Project Moohan's, works within Apple's ecosystem of devices and can tether to a MacBook. But its biggest caveats are its high price and its considerable weight. There have been reports that Apple is working on a more affordable version of the Vision Pro, as well as a lighter second-generation model for 2026. The company also has its sights set on Apple glasses.