
Payments orchestrator BR-DGE expands coverage with AstroPay

Finextra · 10-06-2025
BR-DGE has announced the latest alternative payment method (APM) supported on its platform through a strategic partnership with AstroPay, the global digital wallet service.
AstroPay is now integrated with BR-DGE's platform, making it available as an alternative payment method for BR-DGE's merchant customers across sectors such as ecommerce, travel, gaming, and digital goods. The partnership represents a meaningful step in AstroPay's growth, expanding its reach through BR-DGE's modular infrastructure, broad distribution network, and strong presence in high-demand verticals.
As an independently owned, vendor-agnostic payment orchestrator, BR-DGE is fast becoming the go-to partner for high-volume enterprise merchants through its modular solutions. A category leader in payment orchestration, BR-DGE enables merchants to scale, enter new markets, introduce innovative features, optimise costs, and adapt effortlessly to the future of payments.
Users of AstroPay's multicurrency wallet enjoy a range of financial benefits. Individuals and businesses can receive and withdraw international payments from global clients and platforms with ease, eliminating common transaction hurdles. With multicurrency storage and conversion capabilities, travellers and digital nomads can efficiently manage their money while benefiting from competitive exchange rates.
AstroPay also allows users to make cross-border transactions effortlessly while benefiting from best-in-class foreign exchange rates, ensuring they get the most value for their money when spending abroad. Users also experience faster, lower-cost transactions, avoiding the high fees and slow processing times that often accompany traditional banking methods.
Diego Steinberg, COO of AstroPay, commented on the partnership: 'In a global, digital world, people expect to be able to send money across borders as easily as they do within their own country. We are delighted that BR-DGE has selected our digital wallet and payment platform to expand its payments offering. BR-DGE is a trusted and industry-leading payment orchestration platform, and through our partnership they can continue to help numerous enterprise merchants to enhance their tech stacks and customer payment processes.'
Bringing together over 400 payment technology solutions in one place, BR-DGE works as an independent, trusted and vendor-agnostic partner for the whole ecosystem, creating value and driving better outcomes for payment providers, their merchants and consumers. Founded in Edinburgh in 2018, BR-DGE now processes millions of transactions monthly on behalf of high-volume customers. With a number of large brands already on the platform, BR-DGE is positioned to become the orchestration partner of choice and distribution outlet for payments connectivity, routing, and data services.
Tom Voaden, VP Commercial at BR-DGE, added: 'BR-DGE is solving the disconnect between ecosystem players by working as an anchor point and single source of payment orchestration. AstroPay is a fast-growing and increasingly important payment method for millions of people around the world and is now available to our merchants and payment providers. We're fully committed to the continued expansion of our payments acceptance network and range of available payment methods, to ensure our customers can stay ahead of consumer demand. Adding AstroPay is the next step in that journey.'

Related Articles

Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot

The Guardian · 2 hours ago

Tran* sat across from me, phone in hand, scrolling. 'I just wanted to make sure I didn't say the wrong thing,' he explained, referring to a recent disagreement with his partner. 'So I asked ChatGPT what I should say.' He read the chatbot-generated message aloud. It was articulate, logical and composed – almost too composed. It didn't sound like Tran. And it definitely didn't sound like someone in the middle of a complex, emotional conversation about the future of a long-term relationship. Nor did it mention any of Tran's own behaviours that had contributed to the relationship strain – behaviours Tran and I had been discussing.

Like many others I've seen in therapy recently, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he'd downloaded ChatGPT on his phone 'just to try it out'. What began as a curiosity soon became a daily habit: asking questions, drafting texts, even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, like 'no one knew me better'. His partner, on the other hand, began to feel like she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They're often free, available 24/7 and can offer customised, detailed responses in real time. When you're overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing. But as a psychologist, I'm growing increasingly concerned about what I'm seeing in the clinic: a silent shift in how people are processing distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, in 2019, one in eight people were living with a mental illness, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is restricting access to trained professionals. Clinician time is one of the scarcest resources in healthcare. It's understandable (even expected) that people are looking for alternatives.

Turning to a chatbot for emotional support isn't without risk, however, especially when the lines between advice, reassurance and emotional dependence become blurred. Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Their seductive 'always-on' availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you're asking again. It never challenges avoidance. It never says, 'let's sit with this feeling for a moment, and practise the skills we have been working on'.

Tran often reworded prompts until the model gave him an answer that 'felt right'. But this constant tailoring meant he wasn't just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time, that made it harder for him to trust his own instincts.

Beyond the psychological concerns, there are real ethical issues. Information shared with ChatGPT isn't protected by the same confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that user data is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There's also the risk of harmful or false information. These large language models are autoregressive: they predict the next word based on previous patterns. This probabilistic process can lead to 'hallucinations' – confident, polished answers that are completely untrue. AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists, by contrast, bring clinical skills the models lack: we notice when a client's voice trembles, or when their silence might say more than words.

This isn't to say AI can't have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn't wrong to seek help. His instincts – to make sense of distress and to communicate more thoughtfully – were sound. But leaning so heavily on AI meant his own skill development suffered. His partner began noticing a strange detachment in his messages. 'It just didn't sound like you,' she later told him. As it turned out, it wasn't. She also became frustrated by the lack of accountability in his messages to her, which caused further friction and communication problems between them.

As Tran and I worked together in therapy, we explored what had led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses – sometimes messy, sometimes unsure, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn't just answer; they ask, and they challenge. They hold space, offer reflection and walk with you, while also holding up an uncomfortable mirror.

For Tran, the shift wasn't just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end, he didn't need a perfect response. He needed to believe that he could navigate life's messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality.

Carly Dober is a psychologist living and working in Naarm/Melbourne.

In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat

Gemini Deep Think: Solving Complex Applications in Math and Beyond

Geeky Gadgets · 7 hours ago

What if a machine could think as deeply as a human mathematician, solving problems so intricate they stump even the brightest minds? Enter Gemini Deep Think, an advanced AI model that has not only redefined what artificial intelligence can achieve but also challenged our understanding of reasoning itself. With its performance at the International Mathematical Olympiad (IMO) – a stage traditionally dominated by human brilliance – this AI has proven it can rival the sharpest intellects in tackling complex algebra, geometry, and number theory. Yet this achievement raises a pressing question: can such computational power ever balance its brilliance with real-world practicality?

In this overview, Sam Witteveen explores how Gemini Deep Think is reshaping the boundaries of AI reasoning, from its innovative use of parallel reasoning chains to its potential applications in fields like 3D modeling and algorithm design. But this isn't just a story of triumph; it's also one of trade-offs. While the model's ability to solve intricate problems with precision is unparalleled, its high computational demands and extended processing times reveal the challenges of scaling such technology. As we delve deeper, you'll discover not only the promise of this AI marvel but also the hurdles it must overcome to truly transform industries and redefine intelligence itself. What does this mean for the future of human and machine collaboration? Let's explore.

Gemini Deep Think Overview

The International Mathematical Olympiad (IMO) is widely regarded as one of the most prestigious global competitions, challenging high school students to solve intricate problems in algebra, geometry, and number theory. For the first time in history, an AI model – Gemini Deep Think – has matched the performance of top human participants, scoring an impressive 35 out of 42 points. This achievement is a testament to the model's ability to engage in logical problem-solving and advanced mathematical reasoning, areas traditionally dominated by human intelligence. By excelling in such a rigorous competition, Gemini Deep Think has not only proven its technical capabilities but also highlighted the potential for AI to complement human expertise in solving complex problems. This milestone reflects a significant step forward in AI's evolution, showcasing its capacity to operate in domains that require deep analytical thinking.

How Gemini Deep Think Pushes AI Boundaries

Gemini Deep Think represents a significant advancement in AI reasoning, introducing methodologies that set it apart from earlier models. One of its most notable features is its use of parallel reasoning chains, which allow the model to evaluate multiple candidate solutions simultaneously and select the most effective one (a simplified sketch of this pattern follows below). This capability enables it to excel at tasks such as solving complex algebraic equations, generating structured outputs like 3D models, and addressing intricate coding challenges.

These advanced reasoning capabilities come with a trade-off, however. Solving complex problems can take between 10 and 20 minutes, reflecting the model's substantial computational demands. While this processing time underscores the sophistication of its algorithms, it also highlights the need for optimization to improve efficiency. The balance between computational power and practical usability remains a key area for development as AI continues to evolve.

Video: Gemini Deep Think Challenges Human Brilliance (watch on YouTube).
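Google has not published how Deep Think's parallel reasoning chains are implemented, but the general pattern the article describes – sample several independent attempts concurrently, then keep the strongest one – is commonly called best-of-n selection. The following is a minimal illustrative sketch in Python; generate_chain and score_chain are invented stand-ins for real model calls and a real verifier:

```python
"""Best-of-n sketch of 'parallel reasoning chains': several candidate
solutions are produced concurrently and a scorer picks the winner.
The generator and scorer are illustrative stand-ins, not Google's method."""
import concurrent.futures
import random


def generate_chain(problem: str, seed: int) -> str:
    # Stand-in for one independent reasoning attempt; a real system
    # would sample a full chain of thought from the model here.
    rng = random.Random(seed)
    return f"attempt {seed}: proposed answer {rng.randint(1, 100)} for '{problem}'"


def score_chain(chain: str) -> float:
    # Stand-in verifier; real systems might check the final answer,
    # run unit tests, or ask a critic model to rank the candidates.
    return random.Random(chain).random()


def best_of_n(problem: str, n: int = 8) -> str:
    # Explore n reasoning chains in parallel, then keep the best-scoring one.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        chains = list(pool.map(lambda s: generate_chain(problem, s), range(n)))
    return max(chains, key=score_chain)


print(best_of_n("toy olympiad problem"))
```

The design point this pattern illustrates is the trade-off the article dwells on: quality rises with n, but so does compute, which is consistent with the reported 10-to-20-minute solve times.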
Advancing Beyond Previous AI Models

Gemini Deep Think builds upon and surpasses the capabilities of its predecessors, such as AlphaProof and AlphaGeometry. Unlike those earlier models, which relied heavily on specialized mathematical languages like Lean, Gemini Deep Think processes problems directly in natural language, offering greater flexibility and adaptability. This advancement allows it to handle a broader range of tasks, from mathematical benchmarks to logical reasoning challenges across diverse domains.

Despite its superior performance, the model's computational intensity presents a significant limitation. Its extended processing times make it less practical for applications where speed is critical, such as real-time decision-making or dynamic problem-solving environments. Addressing these limitations will be essential to ensuring its broader applicability and integration across industries.

Potential Applications and Current Limitations

The versatility of Gemini Deep Think opens up a wide range of potential applications. Some of the most promising use cases include:

- Generating structured outputs for industries like 3D modeling, animation, and game development.
- Solving complex mathematical benchmarks with a high degree of accuracy, aiding academic research and education.
- Enhancing logical reasoning in specialized domains such as coding, algorithm design, and software development.

However, the model's limitations cannot be ignored. Its long processing times and high computational requirements pose challenges for industries that rely on rapid decision-making or real-time solutions. These constraints highlight the need for further refinement to make the model more practical and accessible for real-world applications. Without addressing these issues, its adoption may remain limited to niche areas where processing time is less critical.

Future Directions and Integration

As AI technology continues to advance, Gemini Deep Think is poised for broader integration into platforms such as AI Studio and Google Cloud. Through API access, developers could use its reasoning capabilities for specialized applications, ranging from academic research to industrial problem-solving. This integration would let organizations harness the model's advanced capabilities in a more streamlined and accessible manner.

The future of AI development, however, depends on addressing key trade-offs. Balancing intelligence, speed, and cost will be essential to making models like Gemini Deep Think scalable and efficient, and will determine the extent to which such technologies can be adopted across diverse industries, from education and healthcare to engineering and finance. By refining its computational efficiency and reducing processing times, Gemini Deep Think could unlock new possibilities for AI applications. Its ability to perform high-level reasoning tasks with remarkable accuracy positions it as a valuable tool for solving some of the most complex challenges in science, technology, and beyond.
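If that API access does arrive, calling the model would presumably look like any other Gemini request made through the google-genai Python SDK. This is a speculative sketch only: the model identifier "gemini-deep-think" is a hypothetical placeholder, since no public API model name for Deep Think had been announced at the time of writing.

```python
# pip install google-genai
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# NOTE: "gemini-deep-think" is a hypothetical placeholder id; substitute the
# identifier Google actually publishes if and when Deep Think reaches the API.
response = client.models.generate_content(
    model="gemini-deep-think",
    contents="Find all integers n >= 1 such that n divides 2**n + 1.",
)
print(response.text)
```

Given the article's reported 10-to-20-minute solve times, any real integration would likely need generous client timeouts or an asynchronous, poll-for-result workflow rather than a blocking call like this one.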
Shaping the Future of Artificial Intelligence

Gemini Deep Think represents a significant milestone in the evolution of artificial intelligence, showcasing its ability to perform advanced reasoning tasks with precision and accuracy. Its performance at the IMO underscores the potential of AI to rival human intelligence in domains that demand deep analytical thinking. However, the model's computational demands and extended processing times highlight areas that require improvement to ensure its practicality and scalability.

As the field of AI continues to evolve, the focus will remain on optimizing efficiency, usability, and accessibility. By addressing these challenges, models like Gemini Deep Think could pave the way for major advancements across a wide range of industries, shaping the future of artificial intelligence and its role in solving the world's most complex problems.

Media Credit: Sam Witteveen
