Negotiations continue on day 3 of residential construction strike


CTV News · 4 days ago

The Alliance syndicale de la construction and the Association des professionnels de la construction et de l'habitation du Québec (APCHQ) are set to continue talks Friday as the residential construction strike enters its third day.
The APCHQ and the Alliance syndicale met on Thursday afternoon in the presence of a mediator assigned to the case.
Although no agreement was reached on renewing the workers' collective agreement, which expired on April 30, the two parties agreed to continue discussions.
Unlike in other sectors, pay rises negotiated in the construction industry are not retroactive to the expiry date of the previous collective agreement.
Three other sectors of the construction industry — civil engineering/roads, institutional/commercial, and industrial — settled their agreements before they expired.
Those settlements provide wage increases of eight per cent for 2025, five per cent for 2026, five per cent for 2027 and four per cent for 2028.
The Alliance syndicale brings together all the construction unions, representing 200,000 workers.
Around 60,000 of them work in residential construction.
– This report by The Canadian Press was first published in French on May 30, 2024.

Related Articles

Yukon wildfire fighters battle blazes across the prairies

CBC · an hour ago

Yukon wildfire fighters are off to Alberta and Saskatchewan to help fight a major wildfire outbreak. "We've got a low fire danger rating here," said Haley Ritchie, an information officer with Yukon Wildland Fire. "So it's a good time for us to be able to help out."

Both provinces have seen massive evacuations due to wildfires, and a number of rural communities have declared states of emergency.

Five attack crews and an agency representative went to Alberta. The 21 people are accompanied by an L-188 Electra heavy air tanker for aerial firefighting, along with the plane's two pilots and a dedicated mechanic. "They're such specialized aircraft that they come with their own personnel, who travel with the plane," said Ritchie. She said Yukon Wildland Fire does not own the air tanker but contracts it for the season.

A spokesperson for the Alberta government said Yukon crews are battling the Swan complex fire near Slave Lake. It's the hardest-hit part of Alberta, where more than 88,000 hectares have already burned.

The Canadian Interagency Forest Fire Centre puts out the request for wildfire fighters, Ritchie said. Once crews arrive in a province or territory, the local agency coordinates who goes where. Meanwhile, a division supervisor and trainee from the Yukon are also on their way to Saskatchewan to help with personnel logistics there.

"Hopefully in the future, we'll be able to get support when we need it too," said Ritchie. She said if the fire situation were to change in the Yukon, crews could be pulled back within 24 hours. A cool and damp spring has meant a slow start to wildfire season in the Yukon, where just four hectares have burned so far. Personnel can be deployed for 19 days in total, including travel time.

Will AI go rogue? Noted researcher Yoshua Bengio launches venture to keep it safe

Globe and Mail · an hour ago

Famed Canadian artificial-intelligence researcher Yoshua Bengio is launching a non-profit organization backed by close to US$30-million in philanthropic funding to develop safe AI systems that cannot deceive or harm humans, and to find ways to ensure that humanity remains in control of the powerful technology.

The Turing Award winner, whose work helped pave the way for today's generative AI technologies, already holds multiple titles: he is a professor at the Université de Montréal, scientific adviser at the Mila - Quebec Artificial Intelligence Institute, and recently chaired the first international report on AI safety. His new venture will operate differently. 'This is more like what a company would do to solve a particular problem. It's much more top-down and mission-oriented,' he said.

The non-profit is called LawZero, a reference to science fiction writer Isaac Asimov's Three Laws of Robotics, which stipulate that intelligent machines may not harm human beings.

LawZero, based in Montreal, will develop a concept called Scientist AI, which Prof. Bengio and his colleagues outlined in a paper earlier this year. In short, it is an AI system designed without the negative traits found in today's large language models and chatbots, such as sycophancy, overconfidence and deception. Instead, the system would answer questions, prioritize honesty and help unlock new insights to aid in scientific discovery.

The system could also be used to develop a tool to keep AI agents, which can plan and complete tasks on their own, from going rogue. 'The plan is to build an AI that will help to manage the risks and control AIs that are not trusted. Right now, we don't know how to build agents that are trustworthy,' he said. The tool, which he hopes will be adopted by companies, would act as a gatekeeper, rejecting actions from AI systems that could be harmful. The plan is to build a prototype in the next 18 to 24 months.
AI agents are fairly rudimentary today. They can browse the web, fill out forms, analyze data and use other applications. AI companies are making these tools smarter so they can take over more complex tasks, ostensibly to make our lives easier. Some AI experts argue that the risk grows the more powerful these tools become, especially if they are integrated into critical infrastructure or used for military purposes without adequate human oversight. AI agents can misinterpret instructions and achieve goals in harmful or unexpected ways, a failure known as the alignment problem.

Researchers at AI company Hugging Face Inc. recently argued against developing autonomous agents. 'We find no clear benefit of fully autonomous AI agents, but many foreseeable harms from ceding full human control,' they wrote, pointing to an incident in 1980 when computer systems mistakenly warned of an impending Soviet missile attack. Human verification revealed the error.

Prof. Bengio also highlighted recent research showing that popular AI models are capable of scheming, deceiving and hiding their true objectives when pushed to pursue a goal at all costs. 'When they get much better at strategizing and planning, that increases the chances of loss-of-control accidents, which could be disastrous,' he said.

Around 15 people are working with LawZero, and Prof. Bengio intends to bring on more by offering salaries competitive with corporate AI labs, which would be impossible in academia, he said. The non-profit setting is also ideal for this kind of work because it is free of the pressure to maximize profit over safety. 'The leading companies are, unfortunately, in this competitive race,' he said.
The project has been incubated at Mila and has received funding from Skype co-founder Jaan Tallinn, along with the Future of Life Institute, Schmidt Sciences and Open Philanthropy, organizations concerned about the potential risks posed by AI.

After the release of ChatGPT in late 2022, many AI researchers, including Prof. Bengio and Geoffrey Hinton, began speaking up about the profound dangers posed by superintelligent AI systems, which some experts believe to be closer to reality than originally thought. The potential downsides of AI run the gamut from biased decision-making, turbocharged disinformation campaigns, a concentration of corporate and geopolitical power, and bad actors using the technology to develop bioweapons, to mass unemployment and the disempowerment of humanity at large. None of these outcomes is a given, and the topics are hotly debated. Experts such as Prof. Bengio who focus on what other researchers see as far-off and outlandish concerns have been branded 'doomers.'

Some governments took these warnings seriously, with the United Kingdom organizing major international summits on AI safety and regulation. But the conversation has swung heavily in the other direction, toward rapid AI development and adoption to capture the economic benefits. U.S. Vice-President JD Vance set the tone in February with a speech at an AI conference in France. 'The AI future is not going to be won by hand-wringing about safety. It will be won by building,' he said.

Prof. Bengio, among the more vigorous hand-wringers, was in the audience for that speech. He laughed when asked what he was thinking that day but answered more generally. 'I wish that the current White House had a better understanding of the objective data that we've seen over the last five years, and especially in the last six months, which really triggers red flags and the need for wisdom and caution,' he said.
