
Jean-Pierre Azéma, 87, Dies; Chronicled French Collaboration With Nazis
His death, in a hospice, was announced by the university where he taught for more than 35 years, the Institut d'Études Politiques, popularly known as Sciences Po.
With a series of dispassionate, carefully researched books beginning in the 1970s, Mr. Azéma became part of a group of younger historians who helped destroy the postwar myths that France had comforted itself with: that the collaborationist wartime Vichy regime had done what it could to resist the occupying Germans and to protect the French, and that its leader, Marshal Philippe Pétain, was essentially benevolent.
Mr. Azéma was having none of it.
'A phony regime' is what he called Pétain's government in his best-known work, 'De Munich à la Libération, 1938-1944' (1979, and translated in 1984 as 'From Munich to the Liberation'). He condemned the government for its 'sententious moralism and anti-democratic élitism' and its 'defensive and inward-looking nationalism.'
Vichy was 'basically authoritarian,' Mr. Azéma wrote, a careful judgment not then universally accepted. He became known for picking apart Vichy's various factions — from the believers in Pétain's cult to the opportunists, and from those who believed in the marshal's project of a 'National Revolution' to those who were pro-Nazi.
In France, Mr. Azéma's book outsold even the groundbreaking work of his friend the Columbia historian Robert O. Paxton, 'Vichy France,' which Mr. Azéma's mother, Claude Bertrand, had translated into French six years before and which was the first to set off the revisionist tide.
Related Articles


Bloomberg
Ukraine Anti-Corruption Officials Raided by Security Services
Ukraine's anti-graft office said on Monday that some of its investigators had been raided by the security service and by officials from the prosecutor-general's office. The searches were carried out without court orders, the National Anti-Corruption Bureau said on Telegram.
Yahoo
Ukraine calls for talks with Russia next week
Ukrainian President Volodymyr Zelensky has called for talks with Russia next week to push forward ceasefire negotiations amid growing US pressure on Moscow to reach a deal. The Secretary of Ukraine's National Security and Defense Council, Rustem Umerov, proposed a meeting with the Russian side for next week, Zelensky said during his daily address on Saturday.

'The dynamics of the negotiations must improve. We need to do everything possible to achieve a ceasefire. The Russian side must stop avoiding decisions regarding prisoner exchanges, the return of children, and the cessation of killings,' Zelensky said. 'A meeting at the leadership level is essential to genuinely secure peace. Ukraine is ready for such a meeting,' he added.

Russian state media outlet TASS reported that a source close to Russia's negotiating team confirmed that they had received Kyiv's proposal for a meeting. Kremlin spokesperson Dmitry Peskov told Russian media on Sunday that Russia is 'ready to move quickly' on achieving a peace deal with Ukraine, but that its 'main goal' was to achieve its 'objectives.'

Russian President Vladimir Putin 'has repeatedly expressed his desire to transition the Ukrainian settlement process into a peaceful track as soon as possible. It's a lengthy process that requires effort, and it's not simple. And most likely, this is increasingly being understood in Washington,' Peskov told Russian state media journalist Pavel Zarubin.

The last round of ceasefire talks in Istanbul ended swiftly in early June, with Russian and Ukrainian delegates meeting for barely over an hour before calling it quits. According to Russian state media, Russia put forward maximalist territorial demands as part of its preconditions for a ceasefire. Ukraine has previously refused to consider any territorial concessions in exchange for peace.
Zelensky's call for talks comes just after US President Donald Trump offered Putin a 50-day window to achieve a ceasefire before the US implements high tariffs on Russian goods, alongside 'secondary tariffs' on goods from countries that purchase Russian oil. 'We're going to be doing very severe tariffs if we don't have a deal in 50 days,' Trump said during a meeting with NATO Secretary General Mark Rutte in the Oval Office earlier this week. 'I use trade for a lot of things,' Trump added. 'But it's great for settling wars.'

Trump has expressed increasing frustration with Putin in recent weeks, even complaining that Putin's assurances about ceasefire progress are 'bullshit.' Yet Western analysts and Ukrainian officials say that the president's 50-day window is unlikely to deter Putin from accelerating Russia's summer offensive in the coming weeks. Moreover, Russian Foreign Minister Sergey Lavrov has dismissed Trump's threatened tariffs as mere bluster. 'Fifty days – it used to be 24 hours,' Lavrov said. 'It used to be 100 days; we've been through all of this.'

Daria Tarasova-Markina contributed reporting.


Forbes
From Hitler's Bunker To AI Boardrooms: Why Moral Courage Matters
Eighty-one years ago today, Colonel Claus von Stauffenberg walked into Adolf Hitler's Wolf's Lair bunker with a briefcase containing enough explosives to change the course of history. The assassination attempt failed, but Stauffenberg's courage in the face of overwhelming evil offers powerful lessons for our current moment — particularly as we navigate the transformative power of artificial intelligence.

The parallels are uncomfortable, and useful to examine. Then, as now, individual acts of moral courage were essential to preserving human agency in the face of systems that seemed beyond individual control. High-ranking German officials recognized what many contemporaries refused to see: that passive compliance with destructive systems was itself a moral choice.

Today, AI systems are being deployed across society at unprecedented speed, often without adequate consideration of their long-term implications. Many of us assume that someone else — tech companies, governments, international bodies — will ensure AI serves human flourishing. This assumption is dangerous. AI development is not a natural phenomenon happening to us; it is a series of human choices that requires active human agency, not passive acceptance.

The Necessity Of Hybrid Intelligence

Stauffenberg and his conspirators understood that opposing tyranny required more than good intentions — it demanded strategic thinking, careful planning, and the ability to work within existing systems while fundamentally challenging them. They needed what we might today call hybrid intelligence: combining human moral reasoning with systematic analysis and coordinated action.

The biggest performance improvements come when humans and smart machines work together, enhancing each other's strengths. This principle applies not just to productivity but to the fundamental challenge of keeping AI aligned with human values.
We cannot simply delegate AI governance to technologists any more than the German resistance could delegate their moral choices to military hierarchies. Consider the practical settings where hybrid intelligence is essential today.

Double Literacy: The Foundation Of Agency

The German resistance succeeded in part because its members possessed both military expertise and moral clarity. They could operate effectively within existing power structures while maintaining independent judgment about right and wrong. Today's equivalent is double literacy — combining algorithmic literacy with human literacy.

Algorithmic literacy means understanding AI's capabilities and constraints — how machine learning systems are trained, what data they use, and where they typically fail. Human literacy encompasses our understanding of aspirations, emotions, thoughts, and sensations across scales — from individuals to communities, countries, and the planet. Leaders don't need to become programmers, but they need both forms of literacy to deploy AI effectively and ethically.

Every Small Action Matters

Stauffenberg and other members of the conspiracy were arrested and executed on the same day. The immediate failure of the July 20 plot might suggest that individual actions are meaningless against overwhelming systemic forces. But this interpretation misses the deeper impact of moral courage.

The resistance's willingness to act, even against impossible odds, preserved human dignity in the darkest possible circumstances. It demonstrated that systems of oppression require human compliance to function, and that individual refusal to comply — however small — matters morally and strategically. Similarly, in the AI age, every decision to maintain human agency in the face of algorithmic convenience is significant.
When a teacher insists on personally reviewing AI-generated lesson plans rather than using them blindly, when a manager refuses to outsource hiring decisions entirely to screening algorithms, when a citizen demands transparency in algorithmic decision-making by local government — these actions preserve human agency in small but crucial ways.

The key is recognizing that these are not merely personal preferences but civic responsibilities. Just as the German resistance understood their actions in terms of duty to future generations, we must understand our choices about AI as fundamentally political acts that will shape the society we leave behind.

Practical Takeaway: The A-Frame For Civil Courage

Drawing from both Stauffenberg's example and current research on human-AI collaboration, here is a practical framework for exercising civil courage in our hybrid world:

Awareness: Develop technical literacy about the AI systems you encounter. Ask questions like: Who trained this system? What data was used? What are its documented limitations? How are errors detected and corrected? Stay informed about AI developments through credible sources rather than relying on marketing materials or sensationalized reporting.

Appreciation: Recognize both the genuine benefits and the real risks of AI systems. Avoid both uncritical enthusiasm and reflexive opposition. Understand that the question is not whether AI is good or bad, but how to ensure human values guide its development and deployment. Appreciate the complexity of these challenges while maintaining confidence in human agency.

Acceptance: Accept responsibility for active engagement rather than passive consumption. This means moving beyond complaints about "what they are doing with AI" to focus on "what we can do to shape AI." Accept that perfect solutions are not required for meaningful action — incremental progress in maintaining human agency is valuable.

Accountability: Take concrete action within your sphere of influence. If you're a parent, engage meaningfully with how AI is used in your children's education. If you're an employee, participate actively in discussions about AI tools in your workplace rather than simply adapting to whatever is implemented. If you're a citizen, contact representatives about AI regulation and vote for candidates who demonstrate serious engagement with these issues.

For professionals working directly with AI systems, accountability means insisting on transparency and human oversight. For everyone else, it means refusing to treat AI as a force of nature and instead recognizing it as a set of human choices that can be influenced by sustained civic engagement.

The lesson of July 20, 1944, is not that individual action always succeeds in its immediate goals, but that it always matters morally and often matters practically in ways we cannot foresee. Stauffenberg's briefcase bomb failed to kill Hitler, but the example of the German resistance helped shape post-war democratic institutions and continues to inspire moral courage today.

As we face the challenge of ensuring AI serves human flourishing rather than undermining it, we need the same combination of technical competence and moral clarity that characterized the July 20 conspirators. The systems we build and accept today will shape the world for generations. Like Stauffenberg, we have a choice: to act with courage in defense of human dignity, or to remain passive in the face of forces that seem beyond our control but are, ultimately, the product of human decisions.

The future of AI is not predetermined. It will be shaped by the choices we make — each of us, in small acts of courage, every day.