
Latest news with #RiverCrossing

‘Vignettes' by Marty Schnapf Channels the Subconscious in Kaleidoscopic Paintings

Hypebeast

12-07-2025


Summary: Marty Schnapf's solo exhibition Vignettes, now on view at Perrotin Hong Kong, presents a series of kaleidoscopic paintings that blur the boundaries between figuration and abstraction. Drawing from dreams, memories and symbolic archetypes, Schnapf's compositions act as portals into extrasensory realms where bodies dissolve, refract and merge with their surroundings. Water emerges as a central motif, both as subject and metaphor, appearing in works like 'Memory Pool' and 'The Pull of the Moon, the Draw of the Shore,' where translucent figures commune with luminescent fish or navigate turbulent seas, evoking the fluidity of the subconscious. Throughout the exhibition, Schnapf's painterly gestures conjure a rhythmic interplay between material and illusion. In 'River Crossing,' a male figure contorts into a knot of limbs, overlaid with midnight-blue brushstrokes that mimic water's movement. Smaller works such as 'Forager' and 'Pensione 3 a.m.' explore interior and psychological landscapes, using warped topographies and visual glitches to suggest metaphysical passageways. These vignettes are rich with allegorical emblems – a hawk, a mask, a flooded piazza – forming a cryptic visual language that invites viewers to decode their own interpretations.

Your paintings begin with abstract layers that remain visible even after the figures are built on top. How do you view that initial chaos or spontaneity in the final work, and what should viewers know about what's 'underneath' the surface?

I see abstraction as a way of seeding the work with a kind of primordial vitality. The initial layers offer a plane of gestural expression and experimentation from which form and narrative wrestle their way into existence.

In works like 'Forager,' how do you use visual 'glitches' to push that sense of visual tension?

'Forager' is an example of moving the painting through several such lifecycles, from abstraction to representation that is re-abstracted and then redefined several times over. With iteration, glitches can occur that spark new ways of seeing. In this case, the figure appears to exist through time across multiple simultaneous manifestations.

Can you walk us through your use of water as both subject and metaphor in this series? What does it represent for you personally or symbolically?

As a subject, water is a painter's dream. It can be opaque, transparent, translucent, reflective, refractive, calm, explosive and more. In its representation, our mind accepts an almost absurd abundance of expressive forms. This may be one of the reasons it is such a potent symbol of the unconscious. We experience it but can never quite pin it down. I am very intrigued by this place of both knowing and not knowing. There is something deeply honest about such ambiguous states of perception.

You've said before that 'distortion can be more beautiful than clarity.' Can you expand on what this means in your art, and in life?

I said that in reference to the relationship between my paintings and music. I was considering how and why distortion had become a sonically impactful element of contemporary music, and I realized that in highly mediated environments a longing arises for the unintended. Distortion in this context reads as a substratum of unbound truth peeking through the cracks. One can reject such emergence and hold tight to an initial composition, but I prefer to engage and integrate the unexpected.

Works like 'The Pull of the Moon, the Draw of the Shore' and 'The Grateful Swimmer' evoke intensely dream-like moments. While some are rooted in personal memory, others seem to hover in a more mythic or symbolic space. How do you navigate between the autobiographical and the archetypal in your work?

I don't think there needs to be a separation between the archetypal and the autobiographical. I never employ a symbol for the symbol's sake. Rather, the archetypes arise naturally through attention to lived experience. That said, it doesn't have to be my lived experience. We each incorporate within ourselves some part of the lives around us. Listening to this shared self can call forth imagery that would be inaccessible through a narrower reflection.

Your work often evokes a strong sense of movement and gesture, almost like choreography. Given your background in performance and contemporary dance, how do those influences shape the way you construct an image?

Every artistic medium has something to teach that cannot be easily gleaned from the others, yet those lessons can nonetheless be powerfully incorporated across disciplines. Painting is one of the most elastic forms because it works almost entirely through implication while remaining far less encumbered by physics. For example, although a painting may not traverse space, its elements can be paused at the borderline between one location and another. If done well, the viewer will naturally infer the time before and after. But movement is only the most apparent attribute of dance. Far more important, in my opinion, is embodiment. A master dancer exudes an inarticulable force of presence, and it is this, more than any other aspect of dance, that I seek to draw from. Gesture can provide a key route to complex emotional content, whether through the paint stroke in which we feel the movement of the painter's wrist or the painted wrist through which we infer an entire figure. A wonderful drama can be played out simply through the oppositional expressions of two gestures made by the same figure.

This is your first solo exhibition in Hong Kong. Has the cultural context or physical environment here influenced the way you think about this body of work?

This exhibition gave me my first opportunity to visit Hong Kong. Until now, I have known the region primarily through the films of Wong Kar-wai, especially Chungking Express. The show provided an excuse to revisit those films, as I wanted to hold something of Hong Kong in my mind as I painted. I expected the influence might be primarily atmospheric, perhaps aligning the noir portrayal of Hong Kong with that of Los Angeles. Instead, I found myself responding to something far more thematic than aesthetic: the power of chance to change the course of a life, and the power of a life to write poetry into that change.

Marty Schnapf's Vignettes opens today at Perrotin Hong Kong and remains on view until August 16, 2025. Perrotin Hong Kong, 807, K11 ATELIER Victoria Dockside, 18 Salisbury Road, Tsim Sha Tsui, Hong Kong.

Apple researchers find ‘major' flaws in AI reasoning models ahead of WWDC 2025

Time of India

09-06-2025


A newly published Apple Machine Learning Research study has challenged the prevailing idea that large language models (LLMs) like OpenAI's o1 and Claude's thinking variants truly possess "reasoning" capabilities. The study points to fundamental limitations in these AI systems.

For the study, Apple researchers designed controllable puzzle environments, such as the Tower of Hanoi and the River Crossing, rather than relying on standard math benchmarks, which are susceptible to data contamination. According to the researchers, these custom environments allowed for a precise analysis of both the final answers produced by the LLMs and their internal reasoning traces across different complexity levels.

What Apple researchers found in this study

According to a report by MacRumors, the reasoning models tested by Apple's research team, including o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet, saw their accuracy collapse entirely once problem complexity crossed certain thresholds. Success rates dropped to zero even though the models had sufficient computational resources. Surprisingly, as problems became harder, the models reduced their reasoning effort, which points to fundamental scaling limitations rather than a lack of resources.

Even more revealing, the models still failed at the same complexity points when researchers provided complete solution algorithms. This indicates that the limitation lies in executing basic logical steps, not in choosing the right problem-solving strategy. The models also showed puzzling inconsistencies: they could solve problems requiring over 100 moves but failed on simpler puzzles that needed only 11.

The study identified three performance patterns. Standard models unexpectedly performed better than reasoning models on low-complexity problems, reasoning models had an advantage at medium complexity, and both types failed at high complexity. Researchers also discovered that models exhibited inefficient "overthinking" patterns, often finding correct solutions early but wasting computational effort exploring incorrect alternatives.

The key takeaway is that current "reasoning" models rely heavily on advanced pattern matching, not true reasoning. They do not scale their reasoning the way humans do: they tend to overthink easy problems and think less when faced with harder ones.

It is worth noting that this research surfaced just days before WWDC 2025. According to Bloomberg, Apple is expected to focus on new software designs rather than headline-grabbing AI features at this year's event.
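To see why puzzles like these let researchers dial complexity precisely, consider the Tower of Hanoi. A minimal sketch of a solver (illustrative only, not Apple's actual test harness) shows that the optimal move count grows as 2^n - 1, so each extra disk doubles the length of the plan a model must execute without error:

```python
def hanoi_moves(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move list for n disks from peg src to peg dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, src, dst, aux, moves)  # shift n-1 disks out of the way
    moves.append((src, dst))                  # move the largest disk
    hanoi_moves(n - 1, aux, src, dst, moves)  # stack n-1 disks back on top
    return moves

# Optimal solutions need 2**n - 1 moves: 3 disks -> 7, 7 -> 127, 10 -> 1023.
for n in (3, 7, 10):
    print(n, len(hanoi_moves(n)))
```

This exponential growth is what makes the environment "controllable": adding one disk raises the required reasoning depth in a predictable, contamination-free way.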

Apple AI research shows reasoning models collapse when problems are more complex

The Hindu

09-06-2025


A research paper from Apple published on June 6 stated that although large reasoning models (LRMs) showed improved performance on benchmarks, they struggled with accuracy as problems became more complex. Titled 'The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,' the paper revealed that even the most advanced AI reasoning models collapsed entirely when facing harder problems. 'They exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget,' the paper noted.

To test the AI models, the researchers categorised the problems into low-, medium- and high-complexity tasks built from a set of puzzles: Checker Jumping, River Crossing, Blocks World and the Tower of Hanoi. The researchers picked Claude 3.7 Sonnet and DeepSeek-V3 from among the large language models, and Claude 3.7 Sonnet with Thinking and DeepSeek-R1 from among the large reasoning models. The research concluded that both types of AI models had a similar level of capability: the models solved the low-complexity puzzles, but in the high-complexity category both types failed.

The hardware giant has been seen as lagging behind in developing AI technology. Notably, Apple's annual Worldwide Developers Conference is also expected to begin later today.

Apple Debunks AI Reasoning Hype: Models Memorise, Don't Think, Study Reveals

NDTV

09-06-2025


Apple has claimed that new-age artificial intelligence (AI) reasoning models might not be as smart as they have been made out to be. In a study titled 'The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity,' the tech giant claimed that reasoning models like Claude, DeepSeek-R1, and o3-mini do not actually reason at all. Apple claimed that these models simply memorise patterns really well, but when the questions are altered or the complexity is increased, they collapse altogether. In simple terms, the models work well when they can match patterns, but once the patterns become too complex, they fall away.

"Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities," the study highlighted. "Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget," it added.

For the study, the researchers flipped the script on the type of questions that reasoning models usually answer. Instead of the same old math tests, the models were presented with cleverly constructed puzzle games such as Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. Each puzzle had simple, well-defined rules, and as the complexity increased (more disks, more blocks, more actors), the models needed to plan deeper and reason longer.

The findings revealed three regimes:

  • Low complexity: regular models actually win.
  • Medium complexity: thinking models show some advantage.
  • High complexity: everything breaks down completely.

AGI not as near as predicted?

Apple reasoned that if the reasoning models were truly 'reasoning', they would get better with more computing power and clear instructions. Instead, they started hitting walls and gave up, even when provided with solutions. "When we provided the solution algorithm for the Tower of Hanoi to the models, their performance on this puzzle did not improve," the study stated, adding: "Moreover, investigating the first failure move of the models revealed surprising behaviours. For instance, they could perform up to 100 correct moves in the Tower of Hanoi but fail to provide more than 5 correct moves in the River Crossing puzzle."

With talk of human-level AI, popularly referred to as Artificial General Intelligence (AGI), arriving as early as 2030, Apple's study suggests that might not be the case, and that we may still be some distance away from such technology.
