Blackbox Press Event and Regional Government Roundtable
TOKYO, JP / February 5, 2025 / From 2 p.m. to 4 p.m. on January 29, Blackbox held its first in-house media event. We had aimed to start small, so we were pleasantly surprised by the turnout - thank you to all the press and ecosystem representatives who made the time to come by.
As this was our first event, we began by explaining Blackbox's mission statement and the social context that led to our launch. General Manager Taiki Iwasaki spoke about the growing support for startups - especially foreign startups - in Japan over the last few years.
We are now roughly midway through former Prime Minister Kishida's five-year Startup Development Plan, and the landscape is indeed changing. To date, over 700 Startup Visas have been issued, with well over half of their holders graduating to full Business Management Visas.
And yet there remains the issue of Japan being a black box to outsiders, especially within the business sphere. Even information exclusively intended for non-Japanese - for example, the application process for a startup visa - is often only available in Japanese, and often fragmented and buried on various regional government websites.
This is what Blackbox seeks to address. Through Directory pages for each featured city, users can find information on visas, government support programs, local organisations, news, and interviews with local entrepreneurs - all in English.
Startup City Project
We were also happy to welcome Toru Udagawa, representative of the Cabinet Office's Secretariat of Science, Technology, and Innovation Policy, who gave a presentation on the government's roadmap for further developing Japan's startup ecosystem. The startup phase is of course very exciting, but how will a healthy startup ecosystem mature into an economic engine contributing to Japan's future, and what support structures and tools need to be in place to help it do so?
Roundtable With Regional Delegates
The highlight of the afternoon was a roundtable discussion with representatives from Shibuya, Nagoya, Kobe, Kyoto, and Sapporo - all cities within Special Economic Zones that offer startup visas.
It was fascinating to hear how each area's cultural and historical background has shaped its startup scene, as well as its plans for the future. Nagoya, with its long history of heavy industry and manufacturing, is keen to expand into global hubs such as the U.S., Europe, and Singapore, while the port town of Kobe emphasises collaboration within Japan and with nearby neighbours such as Taiwan and Korea. Shibuya's standing as a premier world destination sees it working to attract foreign capital and VCs, while Hokkaido's rich quality of life is a strong draw for talent. Overall, the session offered a brilliant insight into the strengths of the different regions.
Thank You
Blackbox recently celebrated its second year of operations, and we're constantly growing and devising new ways to demystify the startup scene in Japan. If you're interested in joining our enthusiastic community, don't hesitate to check us out here.
Blackbox: https://www.blackboxjp.com/
Contact Information
Taiki Iwasaki, Management
contact@blackboxjp.com
03-6407-9982
SOURCE: Blackbox
View the original press release on ACCESS Newswire