
Gilmour Space Technologies: Aussie-made rocket crashes after launch attempt in Bowen, Queensland
Gilmour Space Technologies conducted the first test launch of its Eris rocket, Australia's first homegrown orbital-class rocket, from Bowen Orbital Spaceport in North Queensland on Wednesday morning.
The rocket exploded approximately 20 seconds after lift-off.
The Gold Coast company faced multiple delays leading up to the launch, mostly due to weather and technical issues, including an electrical fault triggered during final preparations.
Co-founder and chief executive Adam Gilmour previously said that the first launch is always the hardest.
'It's almost unheard of for a private rocket company to launch successfully to orbit the first time,' he said in a statement.
'Whether we make it off the pad, reach max Q, or get all the way to space, what's important is that every second of flight will deliver valuable data that will improve our rocket's reliability and performance for future launches.'
A commentator on the launch live stream can be heard repeating, 'It's going... we're going,' as the rocket blasts into the air, before noting 'it's hovering' as the rocket appears to slow and drift sideways only seconds later.
'It's gone!' he exclaimed as the rocket crashed back down to the Earth's surface.
'Oh no, it didn't go. There wasn't sufficient thrust to actually keep it up.
'It slid straight off the pad; there wasn't sufficient thrust.'
Gilmour Space made history in March last year when its Bowen spaceport was granted the first orbital launch facility licence in Australia, and when it secured the country's first Australian Launch Permit for Eris TestFlight 1 in November.
The Nightly has reached out to Gilmour Space for comment.
Related Articles

Sydney Morning Herald
12 minutes ago
Work-from-home is for employers to decide
Should business owners lose the right to determine where their staff should work? It's a radical notion, but it's being proposed by the Victorian government. Employers have always had the right to say where their staff should be sited, but the proposal is to take away this foundational right and force employers to accept two days of work-from-home for employees who want it.

Lest governments in other jurisdictions are tempted to go down this path, let's be more specific about the flaws in this proposal. I'll start by explaining how businesses maximise productivity regarding work arrangements and what we stand to lose. The critical element is that each employer has the right to set those work arrangements themselves. Having this choice allows employers to decide on work-from-home arrangements that work best for the business, depending on their individual circumstances.

A travel agent in town A, for example, might have lots of work-from-home employees; that works for the type of staff they're looking for and, as a result, they enjoy high productivity levels. That's great. On the other hand, a consulting firm in town B might insist all staff be on deck to foster teamwork and camaraderie, which result in high productivity. That's also great.

The key point here is to allow individual firms the choice of whether to have work-from-home or work-from-work arrangements, or indeed some sort of hybrid arrangement. Allowing that choice lets both the town A and the town B businesses thrive. Denying it would, by definition, cause one of those businesses to suffer a productivity hit. The importance of allowing individual businesses to choose is critical not only to the success of millions of businesses across the country, but also to the national economy.

Some people make the mistake of making sweeping generalisations about what level of work-from-home is best for Australian businesses, but these one-size-fits-all proposals fail to account for what works best for each enterprise. Businesses come in all shapes and sizes, with all sorts of business models, meeting all sorts of customer needs, and that is as it should be in a modern, dynamic economy.


The Advertiser
2 hours ago
Tech giants fail to tackle heinous crimes against kids
Tech giants are failing to track reports of online child sexual abuse, despite figures suggesting more than 16 million photos and videos were found on the platforms. An eSafety report has revealed that Apple, Google, Meta, Microsoft, Discord, WhatsApp, Snapchat and Skype aren't doing enough to crack down on online child sexual abuse despite repeated calls for action. It comes three years after the Australian watchdog found the platforms weren't proactively detecting stored abuse material or using measures to find live-streams of child harm.

"While there are a couple of bright spots, basically, most of the big ones are not lifting their game when it comes to the most heinous crimes against children," Commissioner Julie Inman Grant told ABC radio on Wednesday.

The latest report revealed Apple and Google's YouTube weren't tracking the number of user reports about child sexual abuse, nor could they say how long it took to respond to the allegations. The companies also didn't provide the number of trust and safety staff. The US National Centre for Missing and Exploited Children suggests there were tip-offs about more than 18 million unique images and eight million videos of online sexual abuse in 2022.

"What worries me is when companies say, 'We can't tell you how many reports we've received' ... that's bollocks, they've got the technology," Ms Inman Grant said. "What's happening is we're seeing a winding back of content moderation and trust and safety policies and an evisceration of trust and safety teams, so they're de-investing rather than re-upping."

It comes as YouTube has been arguing against being included in a social media ban for under-16s on the basis that it is not a social media platform but rather is often used as an educational resource. The watchdog commissioner had recommended YouTube be included, based on research showing children were exposed to harmful content on the platform more than on any other.
Meanwhile, other findings in the new report include that none of the giants had deployed tools to detect child sexual exploitation livestreaming on their services, three years after the watchdog first raised the alarm. A tool called hash matching, which detects copies of previously identified sexual abuse material across platforms, wasn't being used by most of the companies, and they are failing to use resources to detect grooming or sexual extortion.

There were a few positive improvements, with Discord, Microsoft and WhatsApp generally increasing hash-matching tools and the number of sources informing that technology. "While we welcome these improvements, more can and should be done," Ms Inman Grant said.

The report is part of legislation passed last year that legally enforces periodic transparency notices on tech companies, meaning they must report to the watchdog every six months for two years on tackling child sexual abuse material. The second report will be available in early 2026.