Notable Tech Trends in 2025


Director Danny Boyle and DP Anthony Dod Mantle demonstrate iPhone camera rigs used to shoot “28 Years Later.”

Telluride is not on my annual festival calendar—too distant, too costly—and I’m rarely at Toronto, and then only with a film. So as a New Yorker, my personal fall awards season kicks off with the New York Film Festival. Like Telluride, it’s a non-competitive festival that showcases and celebrates the best of each year’s new crop, and it has the advantage of relying less on premieres and more on outstanding films gleaned from earlier festivals like Berlin, Cannes, and Sundance. As such, it’s a pretty good sampler of the latest trends coursing through the art and practice of cinematic expression.

This year I noticed a lot of big red Ns in front of NYFF selections. When I tallied the 29 films I saw there, however, only four of them actually bore the scarlet letter. I guess I’ve grown oversensitive to a logotype that consigns the likes of Jay Kelly and Frankenstein to three-week theatrical windows, or two weeks for lesser draws like A House of Dynamite, Train Dreams, and Left-Handed Girl. My tallying effort also revealed that eight of the films I saw at NYFF would be streaming on MUBI. Clearly not a scientific poll, but encouraging.

As for tech trends that caught my eye in 2025, I’ll start with something else I noticed at NYFF.

ASPECT RATIO LIBERATION

Four films I caught at NYFF in September featured the outmoded 1.33:1 aspect ratio. This was the OG motion picture aspect ratio, dating from the earliest 35mm silent films. In the early 1930s, the Academy (AMPAS) ever-so-slightly adjusted the camera aperture to 1.37:1 to make room for a sound track alongside the picture. In practice, there’s no discernible difference between 1.33 and 1.37, and projection screens remained 1.33 either way. Nonetheless, Sergei Loznitsa’s Two Prosecutors, a grim drama set mostly in a drab Soviet prison during Stalin’s Great Purge, adopts a 1.37 ratio, despite being shot with an ARRI Alexa Mini. Loznitsa relies on a static camera, desaturated hues, and murky lighting. Critics laud his choice of frame shape as “claustrophobic,” one that “allows little space to breathe.” Huh? Were six decades of commercial dramas and comedies claustrophobic?

Also screened as 1.37 was Mascha Schilinski’s Sound of Falling, a visually sensorial glimpse at four generations of German women on the same farm in four different time periods. Shot with vintage lenses on an ARRI Alexa Mini and Sony FX6, with an occasional pinhole lens too, it premiered in Competition at Cannes and shared the Jury Award. 

Screened as 1.33 at NYFF was Rose of Nevada, one-man-band filmmaker Mark Jenkin’s distinctive mélange of ghost-ship, horror, time-travel, and working-class toil in a Cornish fishing village. It was screened as 1.33 because it was shot retro, in standard 16mm, often with a wind-up Bolex. Also in 1.33 was Lav Diaz’s Magellan, with Gael García Bernal as the doomed Portuguese explorer. Magellan was shot with a Panasonic Lumix GH7, which resembles a stills camera. The GH7’s Micro Four Thirds sensor has a 1.33 aspect ratio, and Diaz utilized the entire sensor without cropping, a technique called “open gate.” 

Note that 1.33 isn’t natively supported in digital projection. Digital projector imaging chips are either 4096 × 2160 (4K) or 2048 × 1080 (2K) pixels, and their native aspect ratio is always 1.91. To theatrically project 1.33 with a digital projector, black bars are inserted to the right and left of the 1.33 image when the DCP is made. This is called “pillar boxing.” With luck a theater’s screen has adjustable masking or curtains on the right and left to mask off these added black sections. I was at a small film festival earlier this month where such masking was not available, and 1.33 films were projected with a slight trapezoidal shape due to “keystoning” caused by the projector’s steep angle relative to the screen. Apparently the projection system wasn’t equipped to compensate for this common flaw in projected image geometry. Adjustable masking or curtains disguise this problem by squaring the right and left sides of the projected image.
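The pillar-boxing arithmetic is simple enough to sketch. The snippet below is purely illustrative (not part of any DCP-mastering tool): it fits a narrow image full-height into a digital cinema container and reports how wide the black bars must be.

```python
# Sketch: compute the pillar-box bars needed to fit a narrow aspect
# ratio into a DCP container. Dimensions illustrate the idea only.

def pillarbox(container_w, container_h, image_ar):
    """Fit an image of aspect ratio image_ar into the container at
    full height; return (image_width, bar_width_per_side)."""
    image_w = round(container_h * image_ar)
    bar = (container_w - image_w) // 2  # black pixels added per side
    return image_w, bar

# 2K container, 2048 x 1080: a 1.33:1 (4:3) image occupies a
# 1440-pixel-wide area, leaving 304-pixel bars left and right.
print(pillarbox(2048, 1080, 4 / 3))  # (1440, 304)
```

The same function shows why 1.85 material needs almost no pillar boxing in a 1.91 container, while 1.33 sacrifices nearly a third of the chip’s width.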

I’m not questioning an artist’s right to decide the shape of their canvas. Painting and drawing have always enjoyed as much. In classic movie palaces, moreover, 1.33 screens could be thirty feet tall, filled with monumental close-ups of faces, making quite an impression on audiences. But I do think that filmmakers sometimes overstate the significance of inessential technical details that hardly matter compared to true essentials like story, score, performance, cinematography, and art direction. A certain celebrated filmmaker, for instance, switches between frame rates based on a scene’s action quotient, an effect few audience members are equipped to notice or care about. Other filmmakers loudly tout large film formats that inflate production budgets, in spite of mediocre outcomes that could just as effectively have been shot with ARRI Alexas or Sony Venices. In the era of digital projection and DCPs, all filmmaking becomes digital anyway.

That said, other current releases that have hopped on the 1.33 (or 1.37) bandwagon include Lynne Ramsay’s Die My Love (Jennifer Lawrence, Robert Pattinson) and Clint Bentley’s Train Dreams (Joel Edgerton, Felicity Jones). 

Yorgos Lanthimos’s Bugonia (Emma Stone, Jesse Plemons) was shot on Paramount’s 1950s-era VistaVision, a 35mm 8-perf format in which the film moves sideways through the camera, akin to a 35mm stills camera. It has an open-gate 1.50:1 aspect ratio, and Lanthimos used the entire 1.50 open-gate VistaVision image for digital “flat” projection with pillar boxing, in contrast to VistaVision’s usual role as a film format for non-anamorphic widescreen. Bugonia used VistaVision cameras that had in fact come off Paul Thomas Anderson’s earlier shoot for One Battle After Another. Remarkably, Anderson’s film was simultaneously released in no fewer than three formats: IMAX GT Laser and IMAX 70mm in a 1.43:1 aspect ratio, VistaVision in an open-gate 1.50:1 aspect ratio, and conventional 1.85:1, formatted, as usual, to fit with negligible pillar boxing inside a DCP’s 1.91:1 frame.

I happened to first see One Battle After Another on a huge screen projected digitally from a DCP. The opening scene is a nighttime raid on a U.S.-Mexico border immigrant detention center led by charismatic revolutionaries played by Leonardo DiCaprio and Teyana Taylor, who, descending with abandon, taunt guards and plant explosives. I thought the opening scene was beautifully shot: plenty of rich, dark tones and detail in the shadows. A week later I accompanied renowned sound editor and re-recording mixer Larry Blake to a screening of a 35mm VistaVision film print struck directly from the original negative and projected at the Regal Union Square 17 in New York.

In a VistaVision projector, 35mm film travels sideways, eight perfs at a time. There are only four working VistaVision projectors in the world, so this was a rare opportunity. That opening scene? It was dark, not in a good way. Colors were dull and desaturated, dark detail was crushed. There was random dirt on the print and the projection flickered noticeably. Whenever reel breaks approached, there was dirt build-up, just like in the old days of film projection. The whole experience was meh. Give me digital projection any day!

One key difference, besides projector brightness, was that the digitally projected version was digitally graded. Digital color correction and grading bring unprecedented degrees of adjustment to all aspects of image reproduction, not just color. Brightness and gamma can be tweaked separately all along the tonal scale, a level of refinement that color film timing can’t approach. That VistaVision film print, on the other hand, could only be color-corrected by a film timer whose adjustments are limited to Bell & Howell red, green, and blue printer points, each point equal to about a sixth of a stop of adjustment in color or overall brightness. Primitive by comparison.
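To put the coarseness of printer lights in perspective, here is a back-of-envelope conversion. The one-point-equals-roughly-one-sixth-of-a-stop figure comes from the text above; the code is just illustrative arithmetic, not a description of any real timing system’s interface.

```python
# Sketch: one Bell & Howell printer point ~ 1/6 stop, so the light
# change for a given number of points is roughly 2 ** (points / 6).

def points_to_exposure_factor(points):
    """Approximate multiplicative exposure change for N printer points."""
    return 2 ** (points / 6)

# Six points on a channel equals one full stop (a doubling of light);
# even a single point is a far coarser step than digital grading allows.
print(points_to_exposure_factor(6))            # 2.0
print(round(points_to_exposure_factor(1), 3))  # 1.122
```

A digital grade, by contrast, can nudge brightness or gamma by fractions of a percent, and independently at different points along the tonal scale.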

At NYFF, Bradley Cooper evinced palpable excitement about shooting his marriage-breakup/comedy-club romp, Is This Thing On?, in a 1.66:1 aspect ratio. Much of the camerawork was handheld, much of it reportedly by Cooper himself. If you’re not familiar with 1.66, it’s probably because 1.66, too, is outmoded. America adopted the 1.85:1 aspect ratio for theatrical projection in 1953 as an answer to the threat posed by NTSC color television, introduced that same year. Another Hollywood answer to television in 1953 was anamorphic Cinemascope, introduced by 20th Century Fox. Paramount, in turn, introduced 1.66, a non-anamorphic widescreen format with an appearance slightly less wide than 1.85. While 1.66 didn’t catch on in the U.S., Europe preferred it to 1.85. (Paramount introduced VistaVision a year later, in 1954.)

In reality, 1.85 and 1.66 were only ever projection formats, because all 4-perf 35mm original negatives were shot either “full aperture” in the original 1.33 silent format or with an Academy 1.37 hard matte in the camera’s aperture, regardless of intended projection aspect ratio. Why place a tighter 1.85 or 1.66 hard matte in the camera aperture and risk getting visible hairs in the gate when you could shoot with a looser, taller 1.37 Academy matte? The top and bottom edges of the 1.37 matte could accumulate schmutz (a film technical term) and no one would ever see it. That same print with an Academy aperture frame could be screened with either a 1.85 mask in the projector gate in the U.S. or 1.66 mask in Europe. Moreover, a 35mm print with Academy 1.37 framing could readily be threaded onto a film chain or telecine for TV broadcast in 1.33, both here and in Europe. No panning or scanning needed.

In the late 1970s through the 1980s, 1.66 found a secondary use in making 35mm blow-ups from standard 16mm. The crop from the original 1.33 image to a 35mm 1.66 frame was less severe than a crop to 1.85. Sometimes this was beneficial to 16mm films with scenes not composed with blow-up to widescreen 35mm in mind. 

If Bradley Cooper’s 1.66 film were displayed full-height on your own 16:9 HD or 4K TV, which is 1.78:1, you would see thin black bars—pillar boxing—on the right and left sides. Having seen Cooper’s Is This Thing On?, I can say with certainty that his 1.66 aspect ratio would make not a whit of difference (another film technical term) to your enjoyment of his story. Tech specs can sometimes simply be too fine-grained.

The same might be said of Danny Boyle’s 28 Years Later, which he shot mostly on iPhone 15s using anamorphic lens attachments (more on this below), and which he finished in a widescreen 2.76:1 aspect ratio that he claims is demonstrably superior to conventional 2.40. You be the judge. Note that “Scope” projection aspect ratios of 2.35, 2.39, 2.40 are, for all practical purposes, the same. Each was once an incremental accommodation to sound-track requirements. For that matter, all 35mm Cinemascope film cameras utilize full apertures and can still shoot in the original 2.66, which is silent film’s 1.33 aperture times two, since Cinemascope’s anamorphic lenses apply a 2:1 squeeze. 
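The anamorphic arithmetic here is just multiplication: projected aspect ratio equals capture aspect ratio times the lens’s squeeze factor. A quick sanity check of the numbers above (the 16:9 capture ratio for the phone example is an assumption on my part, since that is the sensor region a typical smartphone records video from):

```python
# Sketch: an anamorphic lens squeezes the image horizontally at
# capture, so the unsqueezed (projected) aspect ratio is
# capture aspect ratio multiplied by the squeeze factor.

def projected_ar(capture_ar, squeeze):
    return capture_ar * squeeze

# Silent-era 1.33 full aperture through CinemaScope's 2:1 squeeze:
print(round(projected_ar(1.33, 2.0), 2))     # 2.66
# A 16:9 (1.78) phone image through a 1.55x anamorphic attachment:
print(round(projected_ar(16 / 9, 1.55), 2))  # 2.76
```

Which is how Boyle lands on 2.76 from iPhones, and how every “new” widescreen ratio is really just a capture gate times a squeeze.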

How many whits of difference separate early Cinemascope 2.55 (another accommodation for sound) from Boyle’s 2.76? Enough to fundamentally alter your enjoyment of Boyle’s story? Or have aspect ratios lately joined the ranks of marketing tools, like high frame rates, large film formats, or shooting on film itself? One thing is for sure: if it’s projected digitally—and what isn’t these days?—it must fit inside a DCP’s 1.91:1 real estate on the big screen. Stay within this boundary and you’re free to invent any aspect ratio you want by altering those black borders, top/bottom or left/right. Let the marketing begin!

TIPPING POINT IN USE OF SMARTPHONES AS CAMERAS

Even the general public knows you can now shoot a movie on an iPhone. Apple’s own marketing has seen to this. My recent interview with Julia Loktev about how she made her acclaimed 5 1/2-hour epic documentary, My Undesirable Friends, about young Russian television journalists forced to flee, explores the techniques she favored, including use of a Moment Tele 58mm Mobile Lens. As has been widely shared online, to film 28 Years Later, Danny Boyle and DP Anthony Dod Mantle used iPhone lens attachments like Beastgrip’s Pro Series 1.55X Anamorphic Lens MK2 along with Atlas Mercury T2.2 1.5x Anamorphic Primes with Beastgrip PL-to-iPhone adapters. (According to Beastgrip, 1.78 x 1.55 yields a 2.76 aspect ratio.)

Just released is Shih-Ching Tsou’s Left-Handed Girl, a contemporary story of a mother and her two daughters struggling to run a food stand at a night market in Taipei, Taiwan. The youngest daughter, a charming scamp, is left-handed, often associated in Asia with bad luck. By the end, family secrets burst into the open. Tsou is a longtime collaborator with Executive Producer Sean Baker, who co-wrote the script and edited the film. Tsou, in fact, was a producer on Baker’s 2015 breakthrough, Tangerine, famously shot on iPhones with anamorphic lenses. So too is Left-Handed Girl, which is occasionally marked by the nighttime blue streaks typical of anamorphic lenses. You can catch Left-Handed Girl on Netflix.

“iPhoneography,” for lack of a better term, is a worldwide trend that can only go more mainstream as smartphones continue to enhance their video capabilities annually. Just be aware of the role that so-called “computational photography” plays in enabling and automating video capture in such small devices, simplifying their operation but at the same time removing photographic control from the operator. There are apps to circumvent this, of course, but as AI makes inroads into onboard video image processing, the smartphone will increasingly call the shots on its own.

HYBRID FULL-FRAME MIRRORLESS CAMERAS COME OF AGE

I don’t want to dwell on this development since it’s had a long arc, tracing back to Canon’s popular 5D Mark II in 2008. Sony’s video-centric mirrorless A7S III, introduced in 2020, was a high-water mark, a superb low-light, fanless 4K camera that didn’t overheat. Last year saw the introduction of Nikon’s Z6 III, which records RAW and ProRes RAW up to 6K. This year’s fanless Canon EOS R6 Mark III offers a 7K open gate and can record uncompressed Canon RAW. Nikon, which absorbed RED, has just introduced their Nikon ZR 6K. Sans viewfinder, it’s comparable to a Sony FX3. Nikon says it “fuses” Nikon and RED cinema technology. Bottom line: if you don’t wish to shoot with a smartphone’s puny sensor and limited controls, and don’t mind an affordable large-sensor camera that resembles a stills camera, your options are multiplying.

CINEMA CAMERAS IN BROADCAST

I mentioned this in last year’s tech roundup, but you may not yet be aware of it. Every major cine camera manufacturer, including ARRI, has repurposed or adapted its top-shelf cinema cameras for use in premium broadcasting. ARRI’s 4K Alexa 35 Live-Multicam system, for instance, boasts both HDR and their signature cinematic look. At base it’s an Alexa with an optical fiber adapter and fiber base station capable of supporting cable lengths over a mile. Sony, Panasonic, and Nikon/RED have similar offerings. Another sign that, a quarter century into the digital video era, the cine and broadcast realms are merging.

CHINESE AUTOFOCUS AND CINEMA LENSES

Not a week goes by that I don’t learn of some new low-cost, fast-aperture Chinese lens for mirrorless or cine cameras. I can’t even keep track of all the companies now making them. This remains the case despite the gathering of dark clouds called tariffs (see below). Each year at NAB’s equipment show in Vegas, I visit the booths of Chinese lens manufacturers to kick the tires. Several years ago, these lenses felt heavier and less refined than traditional cine lenses or lenses made in Japan or Korea. Not any more. Despite significantly lower retail prices, physical build quality has steadily improved, and Chinese lens manufacturers are now innovating, introducing lens designs that simply don’t exist elsewhere. This year I bought my first Chinese lens, a Venus Optics Laowa 180mm f/4.5 for Sony E-mount. Its superpowers are autofocus and a 1.5:1 macro capability. It wasn’t expensive. I needed a longer prime and also a macro lens, and aside from this Laowa 180mm, there really isn’t anything else that combines these two attributes.

When it comes to autofocus, however, it pays to be cautious. Sony’s best autofocus lenses, for example, use Sony’s advanced linear motors. Many Chinese autofocus lenses use older, slower tech like stepper motors or voice-coil motors. Their manufacturers must, as well, reverse-engineer the autofocus protocols their lenses will need to interact with products from Sony, Canon, Panasonic, and Nikon/RED. Sometimes these lens autofocus systems work OK for still photos but don’t operate smoothly or continuously enough for video. Always test them first!

NATURAL LIGHT AND UNDEREXPOSURE

More than ever, the choice of whether or not to use artificial light is an artistic one, no longer a matter of necessity. Sensors in all digital cameras are so sensitive, smartphone owners just pull them out and start filming, indoors and out. Most large-sensor cameras are equally sensitive, if not more so. I suspect most young DPs no longer know how to use a light meter. Nor should they. It’s easier and more informative to check a monitor. 

Whatever the source, light is a gift. It takes years to understand, and you must study it devotedly. Ask any realist painter. Shooting with available light, then, is an art unto itself, the yin to the yang of classic film lighting, not simply a matter of pointing a camera. In many contemporary films, what I have a problem with are scenes in which dark tones are so indistinct, whether from bad grading decisions, significant underexposure, or both, that meaningful scene information is lost. I also have a problem with the artlessness, if not mindlessness, of shooting in flat, dull interior lighting that flatters no one. 

Too many films I saw this year suffer from one or both of these afflictions, and the phenomenon seems to be growing. I don’t really want to call out or embarrass anybody, although I can point to, at NYFF, Magellan as an example of underexposure and Is This Thing On? or A Private Life as examples of flatness and over-reliance on interior practicals. 

Some films shot almost entirely with natural light manage to sing. Clint Bentley’s Train Dreams, for instance. Other films think they’re the second coming of Rembrandt or Vermeer when they’re simply underlit. In Chloé Zhao’s Hamnet, I found the strong interior backlighting and silhouetting from windows, scenes in which sometimes you can’t see any trace of characters who are delivering lines, to be distracting, to say the least. 

In fairness to the DPs involved, sometimes it’s the case that in the color grading process, contrast is toned down, colors are desaturated, and shadow detail is crushed without input from the DP. Audiences tend to assume that what they see on the screen is what the DP intended, and often it is. But sometimes a production takes the reins in post, and decisions are made that have nothing to do with the DP’s intentions, nor do they reflect the DP’s experience and sensitivities.

Also, grading suites typically feature calibrated 10-bit monitors and sometimes even calibrated projection. These high-quality monitors readily reproduce the subtlest shades of shadow detail in a room with subdued lighting. The problem is that in the real world, in commercial theaters or at home on your 8-bit TV, these subtleties are more often than not entirely lost.

ALL-IN-ONE APPS

When I first encountered DaVinci Resolve in the 1980s, it was a quarter-million-dollar hardware-based system. After Florida-based parent company da Vinci Systems went bankrupt, Blackmagic Design acquired its assets in 2009. In 2011 Blackmagic issued a free software version of Resolve, retaining most of the capability of the paid version. In the fifteen years since, Blackmagic has acquired other companies and intellectual properties, incorporating Fairlight audio mixing and Fusion VFX/motion graphics into Resolve and building out two different Resolve editing systems. What makes this “all-in-one” approach so compelling is that it has eliminated “round-tripping”: exporting clips into other specialty applications, then re-importing them into Resolve. With everything “under one roof,” you never have to leave Resolve. It’s incredibly convenient and efficient.

I now see another Australian company going down the same path. Popular Photoshop alternative Affinity has been acquired by Canva, which has relaunched it under a new model. The free version of Affinity now incorporates modules for photo editing, vector design, and page layout—kind of like a mashup of Photoshop, Illustrator, and InDesign, but without the stiff annual subscription fees. The paid version of Affinity adds generative AI capability. In either case, whatever you do with photos, text, design, and layout, you never have to leave the app. App consolidation like this feels like the future.

AI IN DESKTOP POST

I stuck my toe in the waters of AI models when I signed up for Topaz Sharpen AI early in 2021. My need at that time was to repair and restore worn family photos, and I learned a lot in the process of experimenting with Topaz: what it was good at, what it was not good at. Five years later the Topaz suite of AI-based apps is a success story. Documentary filmmakers I know from around the world have mentioned using Topaz Video AI to repair vintage stock footage, improve the appearance of graininess in old 16mm, and generally enhance their footage.

Of course it helps to have a late-model computer with plenty of RAM and GPU power, although Topaz can run slowly on CPU processing alone. Also necessary are patience, time to experiment, and a sharp eye. And while there are multiple AI models to choose from in Video AI, no one size ever fits all. Often each clip requires a different AI model and configuration of settings. With each year, as well, Topaz’s AI models seem to evolve. I have found Video AI especially useful in up-rezzing to HD the SD video files digitized from poor-quality VHS tapes. But AI models make mistakes, especially with regard to fine detail in motion against a fuzzy background. After processing with an AI model, you will also want to examine faces very closely, particularly if they are small and your source material is soft and grainy.

As you might guess, Topaz is no longer the only AI game in town, and I expect to see rivals gain ground. What I’d really like to see from one of them are AI models dedicated to the needs of film scans: one focused on adjusting the appearance of film grain, another dedicated to dust-busting, and a third dedicated to eliminating vertical white scratches, traveling or fixed, from scratched film negatives. Anyone out there listening?

GOVERNMENT INTERVENTION

I hope that by the end of next year, reciprocal tariffs will be in the rearview mirror. It’s not impossible. Meanwhile, that $399 German Petzval lens I had my eye on is now $599. I wrote an entire Filmmaker article on the marketplace distortions caused by tariffs in our industry. It’s bracing to realize that most of our gear, digital or otherwise, is either made in China or contains Chinese components. The world’s major economies are inextricably linked, for the better in my opinion. The question remains: as cooler heads prevail, will prices eventually come back down?

You may not have heard the news, but as of December 23, 2025, the next generation of drones from DJI are illegal to import. The FCC has banned the import and sale of all new drone models made by foreign manufacturers including DJI. Something about “an unacceptable risk to the national security of the United States.” Will handheld gimbals be next?

There are lots of additional trends I haven’t touched on, like cloud-based collaboration during production and post using Frame.io or Blackmagic Cloud. Or engaging with, via the internet, editors, colorists, animators, or sound mixers who live in distant countries. Nor have I touched on the incursions of generative AI, such as generative fill and inpainting already available in Adobe Premiere Pro and DaVinci Resolve. I’ve got to save something for next year’s tech roundup! 

 

IN MEMORIAM 2025 – TECH INNOVATORS WE LOST

• Dedo Weigert (1938-2025): cameraman, filmmaker, imaginative inventor, fluid-head pioneer and manufacturer of ingenious lighting, also world-class raconteur. 

• Otto Nemenz (1941-2025): cameraman, DP, inventor, founder of one of Hollywood’s best-loved rental facilities. 

• Joel DeMott (1947-2025): camerawoman, filmmaker, 1970s pioneer of one-man-band filming with modified sync camera and onboard Nagra SN audio recorder while holding camera and mic at same time. 
