Latest news with #perception


CBC
08-08-2025
- General
- CBC
How is Trump's fight with Canada affecting you?
Cross Country Checkup · SUNDAY ON CHECKUP: Plus, how do generational stereotypes affect how you're seen and heard? Call us at 1-888-416-8333


Forbes
04-08-2025
- Business
- Forbes
How Misalignment Begins In The Mind—And What Leaders Can Do
Simin Cai is the President and CEO at Go!Foton.

Has there ever been a moment when you felt judged? It could have been by a family member, a colleague or a supervisor. Anyone. Most people would probably say yes. But then I would follow up and ask: At that exact moment you felt judged, weren't you also judging?

And here's something important I've learned: People judge all the time. It's part of how we navigate the world. So don't feel bad about being judged. And don't feel bad about judging, either. What matters isn't the act itself but what we do with the awareness of it.

That's the deeper problem I run into as a leader. The instant you feel judged, you're also judging the other person. It's simultaneous. And that mutual judgment, often unconscious, is exactly where misalignment starts. It's not just in relationships but also in leadership. The judgment is not in what's said but in how it's perceived.

I've been thinking about this a lot—not because I have the answer, but because I'm still practicing. Still trying. Still failing more than succeeding. But this idea keeps coming back: Most leadership frameworks assume shared perception, and I don't think that's the way forward. Perception isn't shared. It's individual. That realization is entirely changing how I see leadership.

A Perception Gap, Not A Process Problem

The assumption of shared perception is a trap. We talk about alignment like it's a mechanical process: Get the data, set the goals, communicate the plan. But what if everyone is hearing a different version of the same plan?

What I've been practicing is understanding that perception is not collective. It's individual. Even in the same company, looking at the same facts, 10 people might experience 10 different realities.

I call this the gap between 'image space' and 'object space.' The object space is the external world: facts, events and decisions. The image space is how an individual interprets those facts. The correlation between the two? That's where all the challenges and opportunities lie.

We often think misalignment is a communication issue, but communication happens after interpretation. The real misalignment happens silently, when people interpret the same situation differently and then act on those interpretations without realizing the disparity.

This is why I say this isn't just a leadership problem. It happens between any two people. It happens in families. It happens when someone feels judged. Because the moment you feel judged, you're also judging the other person. And when that mutual judgment spirals, it creates disconnection.

So yes, leadership misalignment often stems from assuming uniform understanding. But the deeper issue is that we don't map individual interpretations at all.

From Assumptions To One-To-One Correlations

The overlap between two people's internal worlds is never 100%—that's just a fact. But that doesn't mean we stop trying. So what does it look like to lead with this awareness?

First, it means identifying how each person relates to the same set of circumstances. That's not a one-size-fits-all exercise. It requires pre-alignment: asking questions and imagining how the other person might interpret the same situation.

For example, when I ask someone to recall a moment they felt judged, I'm not just digging into emotion. I'm asking them to connect to their interpretation, because that's where their truth lies. And when they realize that they were judging too, it opens a door. A door to mutual understanding.

Leadership, then, becomes a system of one-to-one correlations. Not broad assumptions.

Here's the challenge: If your company has 10 people, maybe you can have those one-on-one conversations. But if you have 10,000? It's not scalable in the same way. The traditional way is hierarchy, internal comms, town halls and HR processes. But that loses so much context. You're not reaching individuals at that point. You're broadcasting assumptions.

This is where I see the potential for something new.

Individualized Systems For Individual Perceptions

I used to say this issue was AI-agnostic, and technically, the perception problem exists with or without AI. But with the right tools, I believe AI can help solve it—at scale.

The idea isn't just about automation. It's about building systems that acknowledge individualized perception. Systems that use big data, surveys and behavior patterns to detect how each person correlates their image space with object space. And if possible, systems that can then communicate back to that individual, based on their unique interpretation.

We're not there yet, but I'm imagining it. What if we could use AI to interact with 10,000 employees as individuals, not segments? What if we could find patterns of correlation and misalignment and adapt leadership messaging accordingly?

Even in user behavior, this principle applies. Behavior is not universal. It's individualized by experience. AI, if designed with this philosophy, could help bridge that gap. But the technology must respect the primacy of the individual's viewpoint. That's where most models fail. They're trained on collective data. They optimize for generalization. But the real value lies in the individual lens.

Focusing On Individualized Correlationism

I've said it before: This is a challenge I'm still taking on. I haven't succeeded yet. I've had more failures than successes. But it resonates with me because it's real.

Leadership is not about solving everything. It's about improving the correlation, person to person and thought to thought, so that others can see your vision and are willing to work together toward it.

That's why this idea of individualized correlationism is more than a theory. It's a practice: a practice of pre-alignment, of reflection and of mapping perception instead of assuming it.

Maybe that's all we can do. Not solve, but evolve. Not unify, but correlate. Because the image space will always be different from the object space. The closer we get to understanding that gap, the better our decisions, our teams and our outcomes will be.
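The essay never specifies a mechanism, but its core metric, how well each person's 'image space' tracks the shared 'object space', can be made concrete with a toy calculation. The sketch below (plain Python; the names, ratings and survey framing are all hypothetical illustrations, not anything from the article) scores each person individually instead of pooling the team, which is exactly the distinction the author argues for.

```python
# Toy sketch of 'individualized correlationism': score each person's
# interpretation against the shared facts, one to one, rather than
# averaging the whole team into a single alignment number.
# All names and numbers are invented for illustration.
from statistics import correlation, StatisticsError  # Python 3.10+

# Object space: leadership's intended priority of five initiatives (1-10).
object_space = [9, 7, 5, 3, 1]

# Image space: how three (hypothetical) employees rate the same initiatives.
image_space = {
    "ana":   [8, 7, 6, 4, 2],   # closely aligned
    "bo":    [5, 5, 5, 5, 5],   # sees no differences at all
    "chloe": [1, 3, 5, 7, 9],   # inverted interpretation
}

for person, interpretation in image_space.items():
    try:
        r = correlation(object_space, interpretation)
        print(f"{person}: correlation with object space = {r:+.2f}")
    except StatisticsError:
        # A flat response carries no signal, which is itself worth knowing.
        print(f"{person}: correlation undefined (flat responses)")
```

Averaging these three into one team score would look roughly neutral while hiding the fact that one person reads the priorities almost perfectly and another reads them backward; scoring per person surfaces both.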
Yahoo
15-07-2025
- Automotive
- Yahoo
Sparsh CCTV, Innoviz Technologies and Cron AI partner to offer Security and Intelligent Transport System (ITS) Solution Across India
The collaboration aims to deploy LiDAR-camera perception nodes for perimeter and area monitoring across India, unlocking real-time, edge-native intelligence for critical infrastructure.

NEW DELHI, July 15, 2025 /PRNewswire/ -- Sparsh CCTV, one of India's leading manufacturers of smart cameras and integrated surveillance solutions with multiple government and enterprise clients in India, Innoviz Technologies Ltd. (NASDAQ: INVZ), a global automotive leader in LiDAR solutions, and Cron AI, a leading deep learning 3D perception provider, today announced a collaboration to offer an integrated LiDAR-camera-vision perception platform, purpose-built for large-scale deployment across transport, perimeter security, railways and critical infrastructure.

The announcement comes at a time when India's perimeter security sector is valued at USD 3.23 billion (2022) and projected to grow at a CAGR of 14% through 2030¹. The intelligent transport market, currently worth USD 1.44 billion (2024), is expected to more than double by 2033². Meanwhile, railway modernization and infrastructure investments exceed USD 1.3 billion³, driven by national programs like DFCCIL and Kavach. Together, these sectors represent over USD 1 billion⁴ in addressable opportunity for edge-native LiDAR-camera systems — with massive scaling potential through 2030.

With a footprint across 120+ cities, a network of 2,000+ partners and an annual production capacity of 500,000+ security devices, Sparsh will lead go-to-market efforts, support integration and explore local manufacturing for the unified sensing platform.

The edge-native perception nodes — combining high-performance LiDAR, deep-learning perception and vision analytics — are expected to be part of a security infrastructure upgrade project planned to go live at critical infrastructure sites across India. These systems are intended to deliver real-time insights into vehicle movement, pedestrian safety and perimeter activity — all processed on the edge, without reliance on cloud compute or high-power infrastructure.

"This partnership marks a major leap in how India can deploy high-performance, real-time sensing at scale," said Sanjeev Sehgal, Founder and MD of Sparsh CCTV. "By integrating global best-in-class LiDAR and 3D perception into our portfolio, we're equipping our customers across cities, industries, and government with intelligent systems that are built for Indian conditions and ready to scale."

The integrated platform takes into account the complexities of Indian outdoor environments, including heat, dust, glare, fog and electrical instability, and delivers continuous, low-latency awareness at the edge:

- InnovizSMART LiDAR delivers long-range, high-resolution, uniform 3D coverage, built to perform in harsh weather and lighting conditions.
- Cron AI's senseEDGE processes every point in the LiDAR cloud using deep learning — not clustering — for greater accuracy, richer context and consistent performance. It runs at just 8 watts, with no GPU or external infrastructure.
- Sparsh's vision systems contribute intelligent classification, layered redundancy and real-time visual analytics — enhancing overall system reliability.

Together, they create a true sensor fusion platform, where 3D point cloud, video vision and edge-native AI combine to deliver next-gen situational awareness, deployable in the near future.

"India's infrastructure needs intelligence that's fast, reliable, and frictionless," said Tushar Chhabra, Founder and CEO of Cron AI.
"We built senseEDGE for exactly this kind of real-world complexity. With Sparsh and Innoviz, we are making 3D perception truly scalable across India's cities and critical infrastructure." "Innoviz is proud to bring our rugged, automotive-grade LiDAR technology to one of the world's fastest growing infrastructure markets," said Omer Keilaf, CEO and Co-founder - Innoviz Technologies. "Paired with Cron AI's powerful edge-native perception and Sparsh's national reach, we are going to enable real-time intelligence for India's transport, security, and smart city initiatives." From intersections, railways to critical infrastructure and perimeters, this is the future of sensing in India — real-time, intelligent, and built for deployment at scale. About Sparsh CCTV Sparsh is India's first STQC-certified CCTV camera brand and a leader in vision-based security. With operations in 120+ cities, exports to 10+ countries, and a partner network of 2,000+, Sparsh manufactures and deploys over half a million devices annually across sectors like law enforcement, transport, and critical infrastructure. About Cron AI Cron AI is a deep tech company transforming 3D perception for real-world infrastructure. Its platform, senseEDGE, uses deep learning to interpret every point in a LiDAR cloud — delivering accurate, robust insights on just 8 watts of edge compute. Offered via a Data-as-a-Service (DaaS) model, Cron AI makes real-time 3D intelligence easy to deploy and scale. About Innoviz Technologies Innoviz is a global leader in LiDAR technology, serving top-tier automotive and industrial partners. Its solid-state LiDAR sensors and software meet the highest standards for safety, performance, and reliability — and are trusted worldwide across transport, defense, and autonomous systems. Footnotes & Sources Grand View Research – India Perimeter Security Market Outlook Astute Analytica via GlobeNewswire – India ITS Market Forecast See: Indian Railways CapEx Budget, DFCCIL Project, Kavach – PIB Cron AI internal market sizing estimate based on infrastructure deployment potential and edge native perception opportunity Photo: View original content to download multimedia: Error while retrieving data Sign in to access your portfolio Error while retrieving data Error while retrieving data Error while retrieving data Error while retrieving data
Yahoo
11-07-2025
- Science
- Yahoo
How does your brain know something is real?
It's a foggy day in New York City. Through my computer screen, I believe I'm seeing and hearing neuroscientist Nadine Dijkstra. But how can I be sure? 'We are as much creating our perception of reality as we are perceiving it,' she explains. Rather than rudely asking her to perform a battery of tests to prove she exists, I trust my eyes and ears. It's an efficient way to live.

While neuroscientists might argue the details, most agree that perception—essentially, how we process sensory information to create a coherent experience—involves the active construction of a reality, as opposed to the passive reception of the world around us. For instance, when you see a busy road, you're actively creating that reality, combining information from your senses (the sights and sounds of whooshing cars) with past experiences (knowing you've walked along this popular boulevard before). Quickly understanding that the automobiles speeding down the street are real helps keep you safe.

This model for experiencing reality is efficient, but not foolproof: Sometimes our brain still gets things wrong. That dissonance is something Dijkstra, the principal investigator at the Imagine Reality Lab at University College London, examines in her latest study, recently published in Neuron.

Much of Dijkstra's work is inspired by the groundbreaking psychologist Mary Cheves West Perky. In a seminal 1910 paper on imagination and perception, Perky asked subjects to visualize objects—a red tomato, a green leaf, etc.—on a blank wall. Secretly, in that seemingly empty space, Perky projected barely visible images of those same objects on the wall. The subjects were none the wiser, attributing the perceived objects to their imaginations instead of the projections. It appeared, Perky mused, that 'the image of imagination must have much in common with the perception of everyday life.'

More than a century later, many researchers concur, believing that imagination and perception work together to create our sense of reality. But how does our brain know what's real and what's not? Dijkstra's new research may have the answer.

'We expected the results to be more complicated and nuanced,' says Dijkstra. Instead, brain activity measured by functional magnetic resonance imaging (fMRI) scans told Dijkstra a clear story: the level of activity in the fusiform gyrus could predict whether or not someone believed an image was real. The region, located on both sides of the brain behind the temples, plays an important role in recognizing faces and objects, but its ability to potentially sort out real from fake is something neuroscientists weren't aware of before.

The study was a modern twist on Perky's experiment. Instead of projecting fruit and other objects on a wall, Dijkstra and her colleagues asked participants to imagine sets of diagonal lines on a screen. Those lines were then projected into the fMRI machine via a mirror. (Using simple shapes, like diagonal lines, made it easier to predict what subjects might visualize. Ask people to imagine a leaf, and they might envision a plethora of shapes and colors.) The diagonal lines were displayed against a noisy background—think TV static—to make it more difficult to distinguish reality from imagination. When someone saw real projected lines, activity in the fusiform gyrus was stronger than when they knew they were simply imagining the diagonal lines.
At the front of the brain, the anterior insula of the prefrontal cortex, which acts as a kind of hub between brain networks, also showed increased activity when participants saw the projected lines. However, when someone confused imagined lines for real ones, essentially having a mild hallucination, both the fusiform gyrus and the anterior insula lit up—as if they'd seen the real thing.

These results led Dijkstra and her team to conclude that imagined and perceived signals combine to create a 'reality signal.' If strong enough, that signal crosses a 'reality threshold' and we accept what we perceive as objective reality.

While she believes activity in the fusiform gyrus determines whether something passes the reality threshold, Dijkstra said her research is still in its early stages. It could be 'the other way around,' she notes, with activity in the prefrontal cortex deciding 'whether something is real or not based on some other signal' and then feeding that 'back to the fusiform gyrus to boost your experience or make things feel more vivid.'

How the reality threshold is passed matters. Proving a causal link between activity in the fusiform gyrus and hallucinations, for example, might allow medical practitioners in the future to stimulate that part of the brain to treat symptoms of schizophrenia and other brain disorders.

Not only can this research shed light on why humans see things that don't exist, but it can also explain why we sometimes don't believe our eyes. When she first moved to London from the Netherlands, Dijkstra saw a creature in the distance while walking in her neighborhood. She assumed it was a dog, even though it was wandering alone. 'I was really surprised. I was like, "Where's the owner?" I really saw a dog.' If she had turned away and not questioned her reality, she might not have realized what she was actually seeing was a fox, one of the 10,000 or so that call her new city home. Dijkstra perceived something that didn't match her past experiences and, for a moment, saw something that didn't exist.

As for the future of her research, there are many unanswered questions about perception, says Dijkstra, such as whether people with vivid imaginations are more likely to hallucinate. In this field, it's important to consistently challenge what you believe is real. 'You can have this really cool idea that makes a lot of sense and it seems to be explaining so many things, and then it turns out to be totally wrong,' she says. 'And that's OK, we still make progress.'
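Dijkstra's conclusion can be read as a simple additive signal-and-threshold model. The sketch below is only a toy restatement of the article's description; the threshold value and signal strengths are arbitrary placeholders, not parameters from the Neuron paper.

```python
# Toy restatement of the 'reality signal' idea: imagined and perceived
# signals add together, and anything above a 'reality threshold' is
# judged real. All numbers are invented for illustration.
REALITY_THRESHOLD = 1.0  # arbitrary units, assumed for this sketch

def judged_real(perceived: float, imagined: float) -> bool:
    """Combine the two signals and compare against the reality threshold."""
    return (perceived + imagined) > REALITY_THRESHOLD

print(judged_real(perceived=1.2, imagined=0.0))  # clear real lines     -> True
print(judged_real(perceived=0.0, imagined=0.4))  # faint imagery only   -> False
print(judged_real(perceived=0.6, imagined=0.6))  # imagery boosting a
                                                 # weak percept crosses
                                                 # the threshold        -> True
```

On this reading, a vivid enough imagined signal is indistinguishable from a weak percept, which is exactly the confusion the fMRI participants showed when they mistook imagined lines for projected ones.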


Daily Mail
11-07-2025
- Entertainment
- Daily Mail
Can YOU see two animals? This photo of a kookaburra has a second creature hidden in it - so, can you spot it?
From colour-changing fire trucks to 'The Dress', many optical illusions have baffled the internet over the years. But this latest illusion might be one of the strangest yet.

Dr Dean Jackson, a biologist and BBC presenter, has shared an unusual image on TikTok that contains two hidden animals. At the start of the video, Dr Jackson presents a picture of a kookaburra sitting on a log. However, he then reveals that there is actually a second animal hidden somewhere in the picture that only a few keen-eyed viewers can spot.

Dr Jackson describes this as an 'experiment on reframing and reimagining based on a prior image.' In the video, he says: 'A kookaburra perched in a tree, I want to know how quickly you can reframe what you've just seen when we move on to another picture. Lots of people who haven't seen the first picture before see a very different animal here.'

If you still can't see the second animal once the image has zoomed in, Dr Jackson offers a helpful hint. He says: 'The animal that they see is way bigger than a kookaburra and it most definitely cannot fly.' As a final bit of assistance, Dr Jackson adds an image of some grass where the second animal's mouth should be.

After all that, you should be able to see the goat's head emerging from the kookaburra. Markings on the back of the bird's head take on the appearance of a mouth, while the beak becomes the goat's ear.

On TikTok, users rushed to the comments to share their amazement over the bizarre optical illusion. One commenter wrote: 'Wow, completely freaked me out. Absolutely amazing. I thought what goat?' Another chimed in: 'So, could see the goat but I still knew it was a bird. But when the video started again, I saw a bird with a goat's head. Thanks for the nightmare fuel, I guess.'

However, if you struggled to see the hidden goat until it was pointed out, you weren't alone. 'I didn't spot it till about 10 seconds after you added the grass. I work with goats as well,' one commenter wrote. 'I couldn't see it till you added the grass,' added another. One social media user complained: 'What goat, I could only see the bird.'

This illusion works because our brains are primed to recognise patterns in the world around us. Dr Susan Wardle, a psychologist at the National Institutes of Health, told MailOnline: 'The human eye receives noisy, dynamic patterns of light, and it is the human brain that interprets these patterns of light into the meaningful visual experience of objects and scenes that we see.'

Usually, our brains get this right, but sometimes mistakes arise in a phenomenon scientists call pareidolia. Pareidolia is the perception of meaningful patterns in inanimate objects or otherwise random information. In humans' evolutionary past, this habit might have conveyed a survival advantage, since it helped us spot friends or potential threats. The downside is that our brains tend to tell us that there are faces or patterns even when there aren't any to be found. This is the reason why people often spot Jesus looking out from a piece of burnt toast or see the Virgin Mary in a cloud.

In this illusion, your brain's natural pattern-spotting tendencies kick in and impose the image of a goat over the random patterns in the kookaburra's feathers. And, once you've seen it, the image can be difficult to get out of your head.
WHAT IS THE CAFÉ WALL OPTICAL ILLUSION?

The café wall optical illusion was first described by Richard Gregory, professor of neuropsychology at the University of Bristol, in 1979. When alternating columns of dark and light tiles are placed out of line vertically, they can create the illusion that the horizontal rows taper at one end. The effect depends on the presence of a visible line of gray mortar between the tiles.

The illusion was first observed when a member of Professor Gregory's lab noticed an unusual visual effect created by the tiling pattern on the wall of a café at the bottom of St Michael's Hill in Bristol. The café, close to the university, was tiled with alternate rows of offset black and white tiles, with visible mortar lines in between.

Diagonal lines are perceived because of the way neurons in the brain interact. Different types of neurons react to the perception of dark and light colours, and because of the placement of the dark and light tiles, different parts of the grout lines are dimmed or brightened in the retina. Where there is a brightness contrast across the grout line, a small-scale asymmetry occurs whereby halves of the dark and light tiles appear to move toward each other, forming small wedges. These little wedges are then integrated into long wedges, with the brain interpreting the grout line as a sloping line.

Professor Gregory's findings on the café wall illusion were first published in a 1979 edition of the journal Perception. The illusion has helped neuropsychologists study the way visual information is processed by the brain, and it has been used in graphic design, art and architectural applications.

The effect is also known as the Munsterberg illusion, as it was previously reported in 1897 by Hugo Munsterberg, who referred to it as the 'shifted chequerboard figure.' It has also been called the 'illusion of kindergarten patterns', because it was often seen in the weaving of kindergarten students.
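The construction described above is simple enough to reproduce directly: alternating black and white tiles, each row shifted by half a tile, with thin gray mortar lines between rows. The sketch below draws that pattern with Pillow; the tile sizes and colours are arbitrary choices, and Pillow is assumed to be installed.

```python
# Draw the café wall pattern: alternating black and white tiles, each row
# offset by half a tile, separated by thin gray mortar lines. Sizes and
# colours are arbitrary illustrative choices.
from PIL import Image, ImageDraw

TILE, MORTAR, ROWS, COLS = 60, 4, 8, 12
W = COLS * TILE
H = ROWS * (TILE + MORTAR)

img = Image.new("RGB", (W, H), (128, 128, 128))  # gray shows through as mortar
draw = ImageDraw.Draw(img)

for row in range(ROWS):
    y = row * (TILE + MORTAR)
    offset = (row % 2) * TILE // 2               # half-tile shift on odd rows
    for col in range(-1, COLS + 1):              # overdraw edges to cover shifts
        x = col * TILE + offset
        colour = (0, 0, 0) if col % 2 == 0 else (255, 255, 255)
        draw.rectangle([x, y, x + TILE - 1, y + TILE - 1], fill=colour)

img.save("cafe_wall.png")
```

Opening cafe_wall.png, the horizontal mortar lines should appear to slope and the rows to taper, even though every row is perfectly level, which is the effect Gregory described.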