Latest news with #DockerHub


Business Wire
5 days ago
- Business
Former Docker Engineering Leader Joins TurinTech to Help Scale Artemis - Its AI Engineering Platform for the Agentic Era
LONDON--(BUSINESS WIRE)--TurinTech, a leader in evolutionary agentic code platforms, today announced that Michael Parker has joined as its Vice President of Engineering. A veteran of developer tooling and platform engineering, Parker brings decades of experience building scalable systems and leading global teams, including at Docker, where he helped modernize the company's cloud platform and developer experience.

Parker joins TurinTech as it prepares to launch Artemis more broadly, bringing agentic AI into the heart of the developer experience, from planning to production. Built around an outcome-first approach, Artemis helps teams guide and validate AI contributions, align work to their goals, and improve code with confidence. It's a platform designed not just for faster development but for trusted, measurable results.

Mike Basios, Chief Technology Officer at TurinTech, commented: 'We're building Artemis to help teams get the most out of AI—whether that's LLMs, agents, or both. It's not about generating more code—it's about delivering measurably improved outcomes.'

At Docker, Parker played a key role in the company's shift from infrastructure to developer-first tooling. He led platform modernization, scaled distributed teams, and oversaw the user experience behind Docker Hub. At TurinTech, he will oversee engineering delivery across Artemis cloud and on-prem deployments, ensuring developers can work seamlessly with AI agents, planning workflows, and outcome-based review tools.

'Agentic development is a powerful shift, but it needs structure to succeed,' said Michael Parker, VP of Engineering. 'With Artemis, we're building the planning and workflow intelligence that lets AI agents work more like real teammates. Developers stay in control, but get meaningful support—from scoping to implementation to validation. It's about tackling the real-world friction in today's GenAI tools and making AI genuinely useful in everyday engineering.'
Leslie Kanthan, CEO and Co-founder of TurinTech, added: 'Demand for Artemis continues to grow since our limited launch earlier this year. Global enterprises like Intel and Taylor Wessing are already engaging, and we're seeing strong developer interest in our AI-driven engineering platform. With Michael onboard, we're excited to accelerate availability and bring the power of Artemis to more teams, faster.'

Be Among the First to Try What's Next

Discover what Artemis can do, and sign up to be one of the first to access our upcoming AI-powered developer experience:

TurinTech builds intelligent systems that evolve and improve code and machine learning models. Its platforms, Artemis for code and evoML for ML pipelines, combine agentic planning, evolutionary algorithms, and real-time validation to deliver measurable, production-ready results. Whether optimizing GenAI output, modernizing legacy code, or tuning ML for performance, TurinTech helps teams move beyond generation to deliver software that's intelligent by design: trusted, efficient, and built to deliver the results you need with the full power of AI. To learn more, visit


Techday NZ
5 days ago
- Business
Michael Parker joins TurinTech to lead Artemis AI expansion
Michael Parker, previously of Docker, has joined TurinTech as Vice President of Engineering to oversee the scaling of the company's Artemis AI engineering platform.

Appointment and background

Parker brings considerable experience in developer tooling and platform engineering, having held senior roles at Docker, where he was responsible for leading modernisation of the company's cloud platform as well as improving the developer experience. His career includes building scalable systems and managing distributed engineering teams globally. At Docker, Parker was involved in steering the firm's transition from infrastructure-focused solutions to developer-first tooling, leading initiatives such as platform modernisation and overseeing the user experience behind Docker Hub.

Role at TurinTech

In his new post at TurinTech, Parker will be responsible for engineering delivery across both cloud and on-premises deployments of Artemis. He will focus on integrating AI agents into software development processes, overseeing planning workflows and deploying outcome-based review tools, aiming to enable developers to work seamlessly with AI technologies.

TurinTech's Artemis platform is built to support the new era of agentic AI in software development, offering teams guidance, validation of AI contributions, and alignment of development work with organisational goals. The platform is structured around an outcome-first approach, prioritising productivity gains that can be measured and verified.

Mike Basios, Chief Technology Officer at TurinTech, commented: "We're building Artemis to help teams get the most out of AI - whether that's LLMs, agents, or both. It's not about generating more code - it's about delivering measurably improved outcomes."

Parker's appointment comes as TurinTech prepares for a broader rollout of Artemis. The platform is already in use by several global enterprises, including Intel and Taylor Wessing, as part of its limited launch phase earlier this year.
Addressing the challenges facing the adoption of agentic AI, Parker emphasised the importance of structured workflows in development environments reliant on AI agents. "Agentic development is a powerful shift, but it needs structure to succeed," said Michael Parker, VP of Engineering. "With Artemis, we're building the planning and workflow intelligence that lets AI agents work more like real teammates. Developers stay in control, but get meaningful support - from scoping to implementation to validation. It's about tackling the real-world friction in today's GenAI tools and making AI genuinely useful in everyday engineering."

TurinTech reports growing demand for Artemis, as organisations recognise the need for platforms that not only generate code but also deliver functional, production-ready software with a clear focus on organisational outcomes.

Market response

Leslie Kanthan, CEO and Co-founder of TurinTech, said that interest in Artemis has expanded since its initial roll-out. He highlighted the significance of Parker's recruitment in supporting the company's ambitions to increase the platform's availability to more teams worldwide. "Demand for Artemis continues to grow since our limited launch earlier this year. Global enterprises like Intel and Taylor Wessing are already engaging, and we're seeing strong developer interest in our AI-driven engineering platform. With Michael onboard, we're excited to accelerate availability and bring the power of Artemis to more teams, faster."

As part of the broader expansion, Parker has also recruited former colleagues Johnny Stoten and Diogo Ferreira, who previously held roles at Docker, to further bolster the engineering function at TurinTech. TurinTech focuses on building systems that evolve and improve both code and machine learning models.
Its products, including Artemis for code and evoML for machine learning pipelines, use agentic planning, evolutionary algorithms and real-time validation to achieve results that can be measured in a production environment. The aim is to help clients move beyond basic AI generation, facilitating the deployment of software that is robust, efficient and aligned with organisational objectives.


Forbes
23-04-2025
- Business
Docker Brings Familiar Container Workflow To AI Models And MCP Tools
Docker recently announced new tools that apply container technology principles to artificial intelligence development, addressing key challenges around AI model execution and Model Context Protocol integration. The company's MCP Catalog, MCP Toolkit and Model Runner aim to standardize how developers deploy, secure and manage AI components using familiar container workflows. These tools bridge the technical gap between containerization and AI systems while providing enterprise-grade controls for organizations deploying AI at scale.

The Model Context Protocol enables AI applications to interact with external tools and data sources through standardized interfaces. Developed by Anthropic and supported by major AI providers, MCP allows language models and agents to discover available tools and invoke them with appropriate parameters. However, implementing MCP servers presents several challenges, including environment conflicts, security vulnerabilities and inconsistent behavior across platforms. Docker addresses these issues through containerization.

The Docker MCP Catalog, built on Docker Hub infrastructure, provides a repository of containerized MCP servers verified for security and compatibility. Developers can browse and deploy over 100 MCP servers from partners including Stripe for payment processing, Elastic for search capabilities and Neo4j for graph databases.

The complementary MCP Toolkit handles authentication and secure execution. It includes built-in credential management integrated with Docker Hub accounts, allowing developers to authenticate MCP servers once and use them across multiple clients. Rather than launching MCP servers with full host access, Docker containerizes each server with appropriate permissions and isolation, significantly improving security. A typical implementation might use containerized MCP servers to provide AI systems with access to time services, database connections, Git repositories and API integrations.
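The discovery-and-invocation flow described above can be sketched as a toy JSON-RPC exchange. This is an illustrative, in-process simplification, not the real MCP wire protocol (real MCP servers speak JSON-RPC 2.0 over stdio or HTTP, with handshakes and richer schemas); the `get_time` tool and its response are hypothetical, not a real catalog entry.

```python
import json

# Minimal sketch of the MCP request/response shape: a client first lists
# available tools (discovery), then calls one by name with arguments.
# The tool below is invented for illustration only.
TOOLS = {
    "get_time": {
        "description": "Return a fixed timestamp (illustrative).",
        "handler": lambda args: {"time": "2025-01-01T00:00:00Z"},
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request the way an MCP-style server would."""
    method = request["method"]
    if method == "tools/list":  # discovery: what tools exist?
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif method == "tools/call":  # invocation with parameters
        params = request["params"]
        result = TOOLS[params["name"]]["handler"](params.get("arguments", {}))
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_time", "arguments": {}}})
print(json.dumps(listing))
print(json.dumps(call))
```

Docker's contribution sits around this exchange rather than inside it: each server process runs in its own container with scoped permissions, so a compromised or misbehaving tool cannot reach the rest of the host.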
The Docker MCP approach ensures these tools run in isolated environments with controlled permissions, addressing the security concerns that have emerged with MCP implementations.

Model Runner Simplifies Local AI Development

Docker's Model Runner extends container principles to executing AI models themselves. This tool streamlines downloading, configuring and running models within Docker's familiar workflow, addressing fragmentation in AI development environments. It leverages GPU acceleration through platform-specific APIs while maintaining Docker's isolation properties. The system stores models as OCI artifacts in Docker Hub, enabling compatibility with other registries, including internal enterprise repositories. This approach improves deployment speed and reduces storage requirements compared to traditional model distribution methods. The architecture allows data to remain within an organization's infrastructure, addressing privacy concerns when working with sensitive information.

Docker Model Runner does not run in a container itself but uses a host-installed inference server, currently with direct access to hardware acceleration through Apple's Metal API. This design balances performance requirements with security considerations.

Industry Partnerships Strengthen Ecosystem

Docker has secured partnerships with key AI ecosystem players to support both initiatives. The MCP Catalog includes integrations with popular MCP clients, including Claude, Cursor and VS Code. For Model Runner, Docker partnered with Google, Continue, Dagger, Qualcomm Technologies, HuggingFace, Spring AI and VMware Tanzu AI Solutions to give developers access to the latest models and frameworks. These collaborations position Docker as a neutral platform provider in the competitive AI infrastructure space. Several vendors, including Cloudflare, Stytch and Okta subsidiary Auth0, have released identity and access management support for MCP.
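Model Runner's host-installed inference server exposes an OpenAI-style chat-completions API, which is what lets existing tooling talk to locally served models. The sketch below shows the shape of such a request; the endpoint URL, port and model reference are assumptions for illustration (check your local Docker Model Runner configuration for the actual values), and the live call is skipped gracefully when no server is running.

```python
import json
import urllib.request

# Hypothetical local endpoint in the OpenAI chat-completions style;
# the port and path here are illustrative, not guaranteed defaults.
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"

# Models are referenced like container images (OCI artifacts in a registry);
# "ai/llama3.2" is an assumed example name.
payload = {
    "model": "ai/llama3.2",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(request, timeout=5) as response:
        reply = json.load(response)
        print(reply["choices"][0]["message"]["content"])
except OSError:
    # No local inference server reachable; the request shape is the point.
    print("Model Runner endpoint not reachable; skipping live call.")
```

Because the interface mimics a hosted API while the model executes on local hardware, prompts and data never leave the organization's infrastructure, which is the privacy property the article highlights.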
What distinguishes Docker's approach is the application of container principles to isolate MCP servers, providing security boundaries that address vulnerabilities researchers have identified.

Enterprise Considerations and Strategic Impact

For technology leaders, Docker's AI strategy offers several advantages. Development teams can maintain consistency between AI components and traditional applications using familiar Docker commands. The containerized approach simplifies deployment across environments, from development workstations to production infrastructure. Security teams benefit from isolation properties that mitigate risks when connecting AI systems to enterprise resources.

Docker's extension of container workflows to AI development addresses a critical gap in enterprise toolchains. By applying established containerization principles to emerging AI technologies, the company provides organizations with a path to standardize practices across traditional and AI-powered applications. As models become integral to production systems, this unified approach to development, deployment and security may prove valuable for maintaining operational efficiency while addressing the unique requirements of AI systems.