
Rierino launches AI agent builder to power agents with full system awareness
Rierino, the next-generation low-code platform for enterprise innovation, announced today the launch of AI Agent Builder, a new capability designed to help organizations build and deploy intelligent agents that operate inside real systems, not just across conversations.
Unlike traditional approaches that focus on prompts or pre-scripted flows, Rierino's AI Agent Builder allows teams to give agents secure access to backend logic, real-time workflows, and internal APIs—enabling actions like creating a purchase request, retrieving customer history, or triggering multi-step automation based on enterprise data.
'The missing piece in AI agent development isn't more intelligence. It's more structure,' said Berkin Ozmen, Co-Founder and CTO of Rierino. 'AI agents will transform the enterprise by executing real actions, governed by real logic—where business value is actually created. That requires infrastructure purpose-built for execution, not just conversation.'
A Foundation for Enterprise-Grade Agents
AI Agent Builder is not a standalone feature, but a natural extension of Rierino's composable, low-code platform. With it, developers can transform any internal logic into agent-accessible capabilities governed by platform-level RBAC, validation rules, audit trails, and contextual schema definitions.
Agents can invoke saga flows (Rierino's real-time, event-driven orchestration components) as native tools with clearly defined inputs and outputs. These flows eliminate the need for custom glue code or fragile integrations and make structured actions accessible to large language models (LLMs) by design.
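To make the pattern concrete, here is a minimal sketch in Python of how a backend flow might be exposed to an LLM as a tool with a declared schema and basic validation. All names here are hypothetical illustrations, not Rierino's actual API; the point is only the pairing of a flow with a machine-readable contract that an agent can invoke safely.

```python
# Hypothetical sketch: exposing a backend flow as an LLM-callable tool.
# None of these names come from Rierino's API; they illustrate the general
# pattern of declaring inputs/outputs so an agent can act within guardrails.

def create_purchase_request(supplier: str, amount: float) -> dict:
    """Stub standing in for a real orchestration flow."""
    return {"status": "created", "supplier": supplier, "amount": amount}

# Tool definition in the JSON-schema style most LLM providers accept.
TOOL_SPEC = {
    "name": "create_purchase_request",
    "description": "Create a purchase request for a supplier.",
    "parameters": {
        "type": "object",
        "properties": {
            "supplier": {"type": "string"},
            "amount": {"type": "number", "minimum": 0},
        },
        "required": ["supplier", "amount"],
    },
}

def invoke_tool(spec: dict, args: dict) -> dict:
    """Validate the model's arguments against the schema before executing."""
    for field in spec["parameters"]["required"]:
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    amount = args["amount"]
    if not isinstance(amount, (int, float)) or amount < 0:
        raise ValueError("amount must be a non-negative number")
    return create_purchase_request(args["supplier"], float(amount))

result = invoke_tool(TOOL_SPEC, {"supplier": "Acme", "amount": 1200.0})
print(result["status"])  # the agent receives a structured result, not free text
```

Because the validation layer sits between the model and the flow, a malformed or out-of-policy call is rejected before any backend logic runs, which is the governance property the platform describes.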
The platform supports integration with a wide range of LLM providers, including OpenAI, Google Gemini, Amazon Bedrock, Mistral, Anthropic, and on-prem deployments like Ollama or LocalAI—giving enterprises full flexibility over how and where their AI workloads run.
Agents built with Rierino are also channel-agnostic by default. They can be accessed through Rierino's UI, exposed as APIs, or triggered by external events—enabling seamless deployment across chat interfaces, operational systems, or custom frontends.
And because all logic is built using Rierino's microservice-based foundation, agent capabilities are modular, versioned, and reusable across teams and systems—ensuring long-term maintainability and scalability as business needs evolve.
From Prototypes to Production-Grade Agents
Most AI agent platforms today are optimized for experimentation—focused on prototyping flows, generating responses, or showing basic integrations. While that's helpful in the early stages, it falls short in real-world enterprise scenarios where agents must operate across multiple systems, comply with business policies, and deliver measurable outcomes.
Rierino's AI Agent Builder is built for the next phase: production-grade deployment. It enables teams to move beyond pilots and proof-of-concepts by equipping agents with structured tools, secure runtime environments, and composable business logic. Agents aren't just asked to generate ideas—they're expected to pull real-time data, initiate multi-step workflows, and act within enterprise guardrails.
This shift—from conversation to execution—is what turns AI from a novelty into a force multiplier for productivity, automation, and innovation at scale.
Not Just a Tool—An Agent Infrastructure Layer
While many platforms position agents as digital assistants or conversational layers, Rierino takes a fundamentally different approach: Agents are infrastructure-level components that should be embedded, orchestrated, and governed like any other part of a modern enterprise system.
AI Agent Builder is not a new direction—it's the natural evolution of Rierino's long-standing AI focus. As the first low-code platform to offer embedded AI capabilities dating back to 2020, Rierino has consistently pushed beyond surface-level automation. The 2023 launch of RAI, its embedded GenAI assistant, extended these capabilities into content, translation, and UI generation. AI Agent Builder now extends that same architectural depth to autonomous, action-driven agents.
With Rierino, every workflow, API, or rule-based decision can be exposed as a tool an agent can invoke—governed, automatically versioned, and monitored for safe execution. This turns your internal architecture into an AI-ready surface where agents can operate with full trust and transparency.
For organizations looking to scale AI safely and meaningfully, this isn't just another feature—it's a platform-level capability that lets agents evolve as systems grow, maintain compliance as policies shift, and deliver real business impact without introducing chaos or risk.
Rierino AI Agent Builder is now available to enterprise teams looking to bring scalable AI execution into their digital ecosystems.
About Rierino
Rierino is a next-generation technology company helping organizations accelerate digital transformation through low-code development, composable architecture, and embedded intelligence. Its platform empowers teams to create scalable microservices, orchestrate business logic, and build intelligent applications—without black-box constraints. Rierino is backed by the Future Impact Fund and was named one of Fast Company's Top 100 Startups to Watch.
Related Articles


Khaleej Times
7 hours ago
Big Tech on a quest for ideal AI device as legacy gizmos seem outdated
ChatGPT-maker OpenAI has enlisted the legendary designer behind the iPhone to create an irresistible gadget for using generative artificial intelligence (AI). The ability to engage digital assistants as easily as speaking with friends is being built into eyewear, speakers, computers and smartphones, but some argue that the Age of AI calls for a transformational new gizmo.
"The products that we're using to deliver and connect us to unimaginable technology are decades old," former Apple chief design officer Jony Ive said when his alliance with OpenAI was announced. "It's just common sense to at least think, surely there's something beyond these legacy products."
Sharing no details, OpenAI chief executive Sam Altman said that a prototype Ive shared with him "is the coolest piece of technology that the world will have ever seen." According to several US media outlets, the device won't have a screen, nor will it be worn like a watch or brooch.
Kyle Li, a professor at The New School, said that since AI is not yet integrated into people's lives, there is room for a new product tailored to its use. The type of device won't be as important as whether AI innovators like OpenAI make "pro-human" choices when building the software that will power them, said Rob Howard of consulting firm Innovating with AI.
Learning from flops
The industry is well aware of the spectacular failure of the AI Pin, a square gadget worn like a badge and packed with AI features, but gone from the market less than a year after its 2024 debut due to a dearth of buyers. The AI Pin, marketed by startup Humane to incredible buzz, was priced at $699. Now, Meta and OpenAI are making "big bets" on AI-infused hardware, according to CCS Insight analyst Ben Wood. OpenAI made a multi-billion-dollar deal to bring Ive's startup into the fold.
Google announced early this year that it is working on mixed-reality glasses with AI smarts, while Amazon continues to ramp up Alexa digital assistant capabilities in its Echo speakers and displays. Apple is being cautious about embracing generative AI, slowly integrating it into iPhones even as rivals race ahead with the technology. Plans to soup up its Siri chatbot with generative AI have been indefinitely delayed. The quest for creating an AI interface that people love "is something Apple should have jumped on a long time ago," said Futurum research director Olivier Blanchard.
Time to talk
Blanchard envisions some kind of hub that lets users tap into AI, most likely by speaking to it and without being connected to the internet. "You can't push it all out in the cloud," Blanchard said, citing concerns about reliability, security, cost, and harm to the environment due to energy demand. "There is not enough energy in the world to do this, so we need to find local solutions," he added.
Howard expects a fierce battle over what will be the must-have personal device for AI, since the number of things someone is willing to wear is limited and "people can feel overwhelmed." A new piece of hardware devoted to AI isn't the obvious solution, but OpenAI has the funding and the talent to deliver, according to Julien Codorniou, a partner at venture capital firm 20VC and a former Facebook executive. OpenAI recently hired former Facebook executive and Instacart chief Fidji Simo as head of applications, and her job will be to help answer the hardware question.
Voice is expected by many to be a primary way people command AI. Google chief Sundar Pichai has long expressed a vision of "ambient computing" in which technology blends invisibly into the world, waiting to be called upon. "There's no longer any reason to type or touch if you can speak instead," Blanchard said. "Generative AI wants to be increasingly human," so spoken dialogues with the technology "make sense," he added.
However, smartphones are too embedded in people's lives to be snubbed any time soon, said Wood. © Agence France-Presse


Arabian Post
11 hours ago
AMD Stakes Future on Open AI Infrastructure
Advanced Micro Devices projected bold expectations for its artificial intelligence trajectory during its Advancing AI event in San Jose on 12 June 2025, emphasising system-level openness and ecosystem collaboration. CEO Dr Lisa Su unveiled the Instinct MI350 accelerator series, introduced plans for the Helios rack-scale AI server launching in 2026, and fortified AMD's software stack to challenge incumbent leaders in the sector.
Top-tier AI customers including OpenAI, Meta, Microsoft, Oracle, xAI and Crusoe pledged significant investments. OpenAI CEO Sam Altman joined Su onstage, confirming the firm's shift to MI400-class chips and collaboration on MI450 design. Crusoe disclosed a $400 million commitment to the platform.
The MI350 series, comprising the MI350X and MI355X, is shipping to hyperscalers now with a sharp generational performance leap: about four times the compute capacity of prior-generation chips, paired with 288 GB of HBM3e memory and up to 40% better token-per-dollar performance than Nvidia's B200 models. Initial deployments are expected in Q3 2025 in both air- and liquid-cooled configurations, with racks supporting up to 128 GPUs and producing some 2.6 exaflops of FP4 compute.
Looking further ahead, AMD previewed 'Helios', a fully integrated rack comprising MI400 GPUs, Zen 6-based EPYC 'Venice' CPUs and Pensando Vulcano NICs, boasting 72 GPUs per rack, up to 50% more HBM memory bandwidth and system-scale networking improvements compared to current architectures. Helios is poised for market launch in 2026, with an even more advanced MI500-based variant expected around 2027.
Dr Su underscored openness as AMD's competitive lever. Unlike Nvidia's proprietary NVLink interface, AMD's designs will adhere to open industry standards, extending availability of networking architectures to rivals such as Intel. Su argued this approach would accelerate innovation, citing historical parallels from the open Linux and Android ecosystems.
On the software front, the ROCm 7 stack is being upgraded with enterprise AI and MLOps features, including integrated tools from VMware, Red Hat, Canonical and others. ROCm Enterprise AI, launching in Q3 or early Q4, aims to match or exceed Nvidia's CUDA-based offerings in usability and integration.
Strategic acquisitions underpin AMD's infrastructure ambitions. The purchase of ZT Systems in March 2025 brought over 1,000 engineers to accelerate rack-scale system builds. Meanwhile, AMD has onboarded engineering talent from Untether AI and Lamini to enrich its AI software capabilities.
Market reaction was muted; AMD shares fell roughly 1-2% on the event day, with analysts noting that while the announcements are ambitious, immediate market share gains are uncertain. Financially, AMD projects AI data centre revenues growing from over $5 billion in 2024 to tens of billions annually, anticipating the AI chip market reaching around $500 billion by 2028.
These developments position AMD as a serious contender in the AI infrastructure arena. Its push for rack-scale systems and open-standard platforms aligns with the growing trend toward modular, interoperable computing. Competition with Nvidia will intensify through 2026 and 2027, centred on performance per dollar in large-scale deployments.


Arabian Post
2 days ago
AI Browser Agents Mark New Era with H Company Launch
Runner H, Surfer H and Tester H, three autonomous AI agents developed by Paris-based H Company, make web-native task automation available across consumer and enterprise settings. The framework integrates advanced vision-language models that perceive browser interfaces and predict actions, such as clicking buttons or filling text fields, executing them via a headless browser.
Runner H acts as the orchestrator. It receives natural-language instructions, composes workflows using specialised sub-agents and interacts with platforms such as Google Drive, Slack and Notion. In corporate pilots across France and abroad, Runner H has already begun generating revenue, even as the company offers a limited free version for individual users.
Surfer H is a browsing-focused agent powered by an open-sourced model called Holo-1. Built on Alibaba Cloud's Qwen with H Company's enhancements, Holo-1 records a 92.2% success rate on the WebVoyager benchmarking suite, surpassing competitors including Google's Project Mariner and OpenAI's Operator, while reportedly cutting per-query costs by more than five-fold.
Tester H is tailored for web automation and testing, converting English-language prompts into scripted interactions like form-filling or button-clicking. The tool builds on the company's 2024 acquisition of Mithril Security, integrating security-focused mechanisms while aiming to simplify QA workflows for engineers.
Benchmarks show Runner H outperforming market alternatives. On WebVoyager, its VLM-LLM pipeline yielded a 67% task-completion rate, exceeding Emergence's AgentE at 61% and Anthropic's Computer Use agent at 52%. The backbone of Runner H comprises a 3-billion-parameter H-VLM for interpreting GUI elements and a family of internal LLMs optimised for decision-making and code generation.
H Company introduced these agents alongside its Studio platform in March. The Studio provides a unified interface for designing, editing and running web automations, with self-healing UI selectors and workflow version control. The platform currently operates in a private beta, yet H Company envisions expanding access to developers and broader audiences soon.
Founded by former DeepMind researchers and headquartered in Paris, H Company has grown to a team of around 70, with an office in London and plans to open in the United States. Its vision is to redefine AI, moving from dialogue assistants to action-driven agents capable of autonomously executing tasks across software and services.
AI-powered automation is gaining momentum. Browser agents such as OpenAI's Operator, Anthropic's Claude Computer Use and various open-source community initiatives are rapidly evolving. Notably, H Company is contributing to that momentum by publishing research on its vision-language and large language models, and by releasing Holo-1 under an open-source licence, accelerating accessibility for developers.
H Company is refining capabilities via reinforcement learning, agent debugging tools, memory, planning modules and community support. Its research emphasises modularity and cost-efficiency, showing that VLMs can outperform large generalist models in task-grounding while operating at a fraction of the size and serve-time cost.
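The perceive, predict, act loop these browser agents follow can be sketched in a few lines. Everything below is a hypothetical illustration with a stubbed model and a stubbed page state, not H Company's implementation; a real system would run a vision-language model over browser screenshots and execute actions through a headless browser driver.

```python
# Hypothetical perceive -> predict -> act loop for a browser agent.
# The "model" and "page" below are stubs; real agents would pair a VLM
# (to read screenshots) with a headless browser (to execute actions).

from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "type", "click", "done"
    target: str     # element the action applies to
    text: str = ""  # text to type, if any

def predict_action(page_state: dict, goal: str) -> Action:
    """Stub standing in for a vision-language model: maps the observed
    page state and the user's goal to the next action."""
    if goal not in page_state.get("filled", []):
        return Action("type", target="search_box", text=goal)
    return Action("done", target="")

def run_agent(goal: str, max_steps: int = 5) -> list:
    """Loop: observe the page, predict an action, execute it, repeat."""
    page_state = {"filled": []}  # stub for a screenshot/DOM observation
    trace = []
    for _ in range(max_steps):
        action = predict_action(page_state, goal)  # perceive + predict
        trace.append(action)
        if action.kind == "done":
            break
        if action.kind == "type":                  # act on the stub page
            page_state["filled"].append(action.text)
    return trace

trace = run_agent("weather in Paris")
print([a.kind for a in trace])  # ['type', 'done']
```

The step cap (`max_steps`) and the action trace mirror how production agents bound runaway loops and log every action for debugging, the kind of tooling the article says H Company is building out.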