iOS Beta Adds 'Spatial Video' Recording. Blogger Calls Them 'Astonishing', 'Breathtaking', 'Compelling' (daringfireball.net)

MacRumors writes that the second beta of iOS 17.2 "adds a new feature that allows an iPhone 15 Pro or iPhone 15 Pro Max to record Spatial Video" — that is, in the immersive 3D format for the yet-to-be-released Apple Vision Pro (where it can be viewed in the "Photos" app): Spatial Video recording can be enabled by going to the Settings app, tapping into the Camera section, selecting Formats, and toggling on "Spatial Video for Apple Vision Pro..." Spatial Videos taken with an iPhone 15 Pro can be viewed on the iPhone as well, but the video appears to be a normal video and not a Spatial Video.
Tech blogger John Gruber got to test the technology, watching the videos on a (still yet-to-be-released) Vision Pro headset. "I'm blown away once again," he wrote, calling the experience "astonishing."

"Before my demo, I provided Apple with my eyeglasses prescription, and the Vision Pro headset I used had appropriate corrective lenses in place. As with my demo back in June, everything I saw through the headset looked incredibly sharp..." The Vision Pro experience is highly dependent upon foveated rendering, which Wikipedia succinctly describes as "a rendering technique which uses an eye tracker integrated with a virtual reality headset to reduce the rendering workload by greatly reducing the image quality in the peripheral vision (outside of the zone gazed by the fovea)..." It's just incredible, though, how detailed and high resolution the overall effect is...

Plain old still photos look amazing. You can resize the virtual window in which you're viewing photos to as large as you can practically desire. It's not merely like having a 20-foot display — a size far more akin to that of a movie theater screen than a television. It's like having a 20-foot display with retina quality resolution, and the best brightness and clarity of any display you've ever used... And then there are panoramic photos... Panoramic photos viewed using Vision Pro are breathtaking. There is no optical distortion at all, no fish-eye look. It just looks like you're standing at the place where the panoramic photo was taken — and the wider the panoramic view at capture, the more compelling the playback experience is. It's incredible...

As a basic rule, going forward, I plan to capture spatial videos of people, especially my family and dearest friends, and panoramic photos of places I visit. It's like teleportation... When you watch regular (non-spatial) videos using Vision Pro, or view regular still photography, the image appears in a crisply defined window in front of you. Spatial videos don't appear like that at all. I can't describe it any better today than I did in June: it's like watching — and listening to — a dream, through a hazy-bordered portal opened into another world...

Nothing you've ever viewed on a screen, however, can prepare you for the experience of watching these spatial videos, especially the ones you will have shot yourself, of your own family and friends. They truly are more like memories than videos... [T]he ones I shot myself were more compelling, and took my breath away... Prepare to be moved, emotionally, when you experience this.
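
Gruber's point about foveated rendering is easy to sketch in code: render at full detail only in a small window around the tracked gaze point, and cut the shading rate as the angle from the gaze grows. A minimal Python sketch follows; the breakpoints and rates are invented for illustration, not Vision Pro's actual parameters.

    # Minimal sketch of foveated rendering: full resolution near the
    # tracked gaze point, progressively less in the periphery.
    # The thresholds and rates below are made up for illustration.
    def shading_rate(angle_from_gaze_deg: float) -> float:
        if angle_from_gaze_deg < 5:      # foveal region: full detail
            return 1.0
        elif angle_from_gaze_deg < 20:   # near periphery
            return 0.5
        else:                            # far periphery
            return 0.25

    for angle in (2, 10, 40):
        print(f"{angle} deg from gaze -> {shading_rate(angle):.2f}x resolution")

The payoff is what the excerpt describes: the eye tracker keeps the full-detail region wherever you look, so the scene reads as uniformly sharp while most pixels are rendered cheaply.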

  • by ACForever ( 6277156 ) on Sunday November 12, 2023 @11:05PM (#64001323)
    sounds like all picked from the pre-approved apple adjectives list
    • by ls671 ( 1122017 )

      sounds like all picked from the pre-approved apple adjectives list

      This is not new anyway; my uncle went to the space station 10 years ago and he brought back several "spatial videos" he recorded with his iPhone. /s

    • by systemd-anonymousd ( 6652324 ) on Monday November 13, 2023 @02:23AM (#64001571)

      It's from Daring Fireball. He's been a hardcore Apple shill for over a decade

      • Re: (Score:3, Interesting)

        by AmiMoJo ( 196126 )

        Given that the iPhone doesn't have stereo cameras, they must be using the depth detection stuff and a single lens. Depth detection is used for "portrait mode", where the background is blurred based on the device guessing which pixels are the subject and which are further back.

        It often looks a bit janky, especially around difficult edges like hair. So presumably these videos look just as bad.

        • by NoMoreACs ( 6161580 ) on Monday November 13, 2023 @09:00AM (#64002039)

          Given that the iPhone doesn't have stereo cameras, they must be using the depth detection stuff and a single lens. Depth detection is used for "portrait mode", where the background is blurred based on the device guessing which pixels are the subject and which are further back.

          It often looks a bit janky, especially around difficult edges like hair. So presumably these videos look just as bad.

          But the iPhone 15 does have stereo cameras:

          https://external-content.duckd... [duckduckgo.com]

          You have to shoot in Landscape Mode. No fakery here.

          • by AmiMoJo ( 196126 )

            One is an ultrawide though. I suppose with some hackery they could get two similar enough images out of both to produce a single stereo view, but it won't look particularly good.

            There's a reason why nobody even attempts it for standard portrait mode photos, or 3D photos.

            • One is an ultrawide though. I suppose with some hackery they could get two similar enough images out of both to produce a single stereo view, but it won't look particularly good.

              There's a reason why nobody even attempts it for standard portrait mode photos, or 3D photos.

              People say it looks great; so?

              Remember, lotsa computational photography going on.

            • The article explains this:

              * You have to shoot spatial video holding the iPhone horizontally to get the two cameras side-by-side
              * The spatial video is limited to 1K 30fps because of the limitations of the wide angle camera

              • by AmiMoJo ( 196126 )

                Thanks. So it sounds like it's pretty shit then. 1k is SD, not even HD. Blown up to huge proportions for your headset.

                If they had a better processor available, the way to do it would be to create a 3D model and texture it using the primary camera. There are some video conferencing systems that do that, and apparently the effect is quite good. But they are pushing much higher quality images too.

                • Good lord, have you thought that through even a little bit? It's the first generation of stereo recording "on your fucking phone" for Christ's sake. You've written it off because it's not 4K stereo, recorded from cameras on gimbals four feet apart.

                  Reminds me of seeing Toy Story with a girl I dated, 15 years after it was made. She said to me, "This computer animation is really bad." I said, "You understand this was basically new to the world when it was made... right? Like, the very first of its kind?" And

                  • by AmiMoJo ( 196126 )

                    I'm just comparing it to the claims in TFA which say it's amazing.

                    Maybe the iPhone 16 will have two normal cameras, placed further apart.

                    • Well, okay. "Amazing" is amazingly subjective. I think Pacific Rim looks "amazing" on good old 1080p non-UHD Blu-ray.

                • Thanks. So it sounds like it's pretty shit then. 1k is SD, not even HD.

                  By "1K" they mean 1080p, which is HD. So you are consistently wrong.

                  • by AmiMoJo ( 196126 )

                    That would make 4k actually 2k. It's 2160 pixels high.

                    The k generally refers to the nearest multiple of 1000 horizontal pixels.

                  • Nope:

                    HD standards:
                    720p = 1k = 1280 x 720
                    1080p = 2k = 1920 x 1080

                    UHD standards:
                    1440p = 3k = 2560 x 1440
                    2160p = 4k = 3840 x 2160
                    5k = 5120 × 2880
                    8k = 7680 × 4320

                    You're thinking of the wrong axis. 1k is still HD, but only just, and only in comparison to VGA, NTSC, etc. decades-old crap.
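
                    For what it's worth, the nearest-thousand convention being argued over here is mechanical; a quick Python check using the horizontal pixel counts from the table above:

                        RESOLUTIONS = {
                            "720p":  (1280, 720),
                            "1080p": (1920, 1080),
                            "1440p": (2560, 1440),
                            "2160p": (3840, 2160),
                            "5k":    (5120, 2880),
                            "8k":    (7680, 4320),
                        }

                        # "k" label = horizontal pixel count rounded to the nearest thousand.
                        for name, (w, h) in RESOLUTIONS.items():
                            print(f"{name}: {w}x{h} -> ~{round(w / 1000)}k")

                    This reproduces the labels listed above, including 1080p landing on 2k, which is the crux of the disagreement.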

                • Thanks. So it sounds like it's pretty shit then. 1k is SD, not even HD. Blown up to huge proportions for your headset.

                  If they had a better processor available, the way to do it would be to create a 3D model and texture it using the primary camera. There are some video conferencing systems that do that, and apparently the effect is quite good. But they are pushing much higher quality images too.

                  You're an idiot.

                  480p is SD. 720p is the low end of HD.

                  They are apparently recording and saving 2 complete video streams. No bullshit "3D modeling" to do this.

                  They are probably doing a little computational magic to make the two different camera images align better, and likely to enhance the "spread"; but that appears to be about "it".

                  Nobody who has tried it has been unimpressed. Quite the opposite, in fact.

    • by dfghjk ( 711126 )

      Well it's coming from one of Apple's favorite paid shills. That's the vocabulary.

    • sounds like all picked from the pre-approved apple adjectives list

      Not really. He sounds like he's describing the *bare minimum expectations* for what is going to be the most expensive piece of VR gear on the planet. I expect this device to not only blow my socks off, but to blow me at the same time given what it costs.

    • Yeah, they could have mixed it up. Or finished out the alphabet: 'Disastrous', 'Error-prone', 'Failure', etc. ;-)
    • by Tarlus ( 1000874 )

      He forgot "Magical", "Innovative" and "Groundbreaking".

  • by illogicalpremise ( 1720634 ) on Sunday November 12, 2023 @11:05PM (#64001325)

    Someone posted this on HackerNews 3 years ago in reaction to his breathless panting about the M1 chip being the best thing ever:

    "Yeah John Gruber has really become a shill over the last few years. His article reads like an Apple keynote transcript, everything is magical and amazing, a marvel and mindbogglingly great. No wonder they throw him so many exclusives ;)"

    So John drooling over another Apple proprietary technology should be taken with some very large grains of salt.

  • The average person's eye can detect a resolution of 60 dpi .. the Apple Vision Pro is around 35. That's way better than the Quest 2, but there's no way the picture can be crisp. It's possible they eliminated the screendoor effect by getting rid of the space between pixels .. but I don't see how the image can appear crisp.

    This affirms Gruber's shill nature .. no wonder they let him try the product. Kissing ass pays off.

    • by Rei ( 128717 ) on Monday November 13, 2023 @06:51AM (#64001789) Homepage

      The average person's eye can detect a resolution of 60 dpi

      How does that claim make any sense? The resolution the human eye can detect depends on the distance the screen is from the eye. I think you meant 60 arc-seconds (1 arc minute), or 60 ppd (pixels per degree), which is accurate, but only with 20-20 eyesight in ideal lighting conditions. In dim lighting, the maximum resolution drops dramatically. At the closest possible distance the eye can fully focus on (~25cm / 10") that's something like 350 dpi / ppi.

      The resolution of the eye is interesting. In daylight with 20/20 vision, it's like a 7MP camera, except that 6MP of that is in an area the size of your two thumbnails together with your arm extended (the fovea), and there's 1MP for "everything else". But that 1MP is generally enough to notice if something changed, which triggers the brain to have your eyes dart briefly to look at the thing that changed, putting it in foveal vision and updating your mental map of the scene. Your brain also eliminates the blind spot (fun experiment - close your right eye, pick a distant point, put your left thumb next to it, and then slowly - while looking at the point, not your thumb - move your thumb left. At a certain point, it'll suddenly disappear :) ).

      At night, however, cone performance drops dramatically and you rely on rods. While rods are still concentrated toward the centre, they're not nearly as concentrated as cones are. So the central resolution is much lower, which is one key reason why it's hard to read at night.

      But the TL;DR: having a high-resolution display but using much more processing power where the fovea is focused than everywhere else is a great idea that I'm glad to finally see make it into practice. And even if the resolution is lower than the nominal ideal resolution for human vision, one can't assume perfect vision and perfect lighting in most cases. Plus there's a difference between "being technically able to make out finer details if they were present" and "noticing that you're not making out finer details which you technically would be capable of doing".
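
      As a back-of-envelope check of those numbers, assuming the nominal 1 arc-minute acuity of 20/20 vision (the viewing distances are just examples):

          import math

          # One arc-minute subtends d * tan(1/60 degree) at distance d;
          # the matching pixel density is the reciprocal of that size.
          def ppi_at(distance_inches: float, arc_minutes: float = 1.0) -> float:
              pixel = distance_inches * math.tan(math.radians(arc_minutes / 60))
              return 1.0 / pixel

          print(f"{ppi_at(10):.0f} ppi at 10 in (~25 cm)")    # ~344 ppi
          print(f"{ppi_at(24):.0f} ppi at 24 in (desktop)")   # ~143 ppi

      That lands right around the ~350 dpi figure at the near point, and shows why a flat "dpi the eye can detect" claim is meaningless without a distance.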

    • by dfghjk ( 711126 )

      "The average person's eye can detect a resolution 60 dpi"

      At what distance? Citation please.

      Visual resolution is generally specified in angular units to avoid the viewing distance problem. However, average visual resolution at monitor viewing distances (remember the Retina display?) is considered to be 100 dpi. How would that translate to something an inch from your face?

    • The average person's eye can detect a resolution of 60 dpi .. the Apple Vision Pro is around 35. That's way better than the Quest 2, but there's no way the picture can be crisp. It's possible they eliminated the screendoor effect by getting rid of the space between pixels .. but I don't see how the image can appear crisp.

      This affirms Gruber's shill nature .. no wonder they let him try the product. Kissing ass pays off.

      Every review of Vision Pro remarks on the display quality. Every. Single. One.

      • LOL and you think these reviewers would ever get a piece of Apple hardware again if they didn't fawn over this thing.
        • Funny, I remember many reviewers of many dud products Apple has released in the past still getting the ability to review and comment on newer things after trashing such products as the puck mouse, the screen-less iPod Shuffle, the Steve Jobs Vanity Computer a.k.a. the stupidly expensive Power Mac G4 Cube, iPhone 4 antenna-gate, the overly-expensive and completely ignored by everyone including Apple wastebasket Mac Pro, their useless and totally botched attempt at musical social networking named "Ping", the

  • Why is the spatial video double in size? They are capturing and storing two streams apparently. Why? Why don't they just store the delta and compress that? It seems really dumb to store two streams of images that are almost exactly the same. They should only store the difference (the stereographic information).

    • by dgatwood ( 11270 ) on Monday November 13, 2023 @01:14AM (#64001513) Homepage Journal

      Why is the spatial video double in size? They are capturing and storing two streams apparently. Why? Why don't they just store the delta and compress that? It seems really dumb to store two streams of images that are almost exactly the same. They should only store the difference (the stereographic information).

      Yeah, an ideal implementation would presumably encode the second eye as a series of I-frames in a second stream that somehow reference the most recent I-frame or P-frame in the main stream, using intra (full data) macroblocks only when there's low spatial correlation with the other version of the frame.

      But that would probably involve a lot of computational and data moving overhead, because the second encoder would have to be potentially doing P-frames derived from P-frames, not just I-frames, and choosing between three options (prediction from the other channel's I-frame, prediction from the other channel's P-frame which references the other channel's I-frame, or encoding the macroblock directly as an intra macroblock) instead of two, so you likely couldn't use an off-the-shelf encoder chip to do it.

      And since you presumably wouldn't be encoding an I-frame on that channel, you wouldn't be able to reference that channel's I-frame even if doing so would save data, or else you'd have to encode an I-frame for the second channel and then set it aside and emit it only if it would save data compared with doing a predictive frame based on the original when averaged over the next couple of seconds (the time between I-frames), in which case you're encoding everything on the second channel twice, once with a cross-channel reference and once with a same-channel reference, then choosing between the two. And at that point, you might as well start doing crazy stuff like B-frames with all the overhead you're adding. :-D

      In other words, doing multiple streams on a bog-standard H.265 hardware encoder is likely way, way, way easier than doing a cross-channel encoder *well*, so I don't blame them for going that route. If we were talking about a RAW file format for stills, I'd have a very different opinion, but for video, separate channels is probably the right call.
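
      As a toy illustration of that per-macroblock choice, here is a Python sketch that picks the cheapest of same-channel prediction, cross-channel prediction, or intra coding. The sum-of-absolute-differences cost and the fixed intra cost are crude stand-ins for a real rate-distortion model, and none of this corresponds to any actual codec API:

          import numpy as np

          def sad(block, ref):
              """Sum of absolute differences: a crude prediction-cost proxy."""
              return int(np.abs(block.astype(int) - ref.astype(int)).sum())

          def choose_mode(block, same_channel_ref, cross_channel_ref, intra_cost):
              costs = {
                  "inter_same":  sad(block, same_channel_ref),    # predict from own channel
                  "inter_cross": sad(block, cross_channel_ref),   # predict from other eye
                  "intra":       intra_cost,                      # code the block outright
              }
              return min(costs, key=costs.get)

          # Demo: a 16x16 "macroblock" whose same-channel reference differs
          # only by slight noise, versus an unrelated cross-channel reference.
          rng = np.random.default_rng(0)
          blk = rng.integers(0, 256, (16, 16))
          same = np.clip(blk + rng.integers(-3, 4, (16, 16)), 0, 255)
          cross = rng.integers(0, 256, (16, 16))
          print(choose_mode(blk, same, cross, intra_cost=16 * 16 * 64))  # inter_same

      The overhead described above comes from having to evaluate (and buffer reconstructions for) all of these candidates per block, per frame, across two streams, which is exactly what off-the-shelf encoder silicon doesn't do.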

      • Interesting that they claim spatial video is double in size. The MVC (multi view coding) extensions added to AVC/H.264 didn't produce dependent view substreams that were anything like the size of the primary stream. There's so much information that can be shared, and it could be 2D + deltas. At least that was my experience working with an encoder for stereoscopic Blu-ray Disc. This is covered in an interesting article: https://faculty.engineering.as... [asu.edu]. Apple are using HEVC, and it will be interesting to see how they do it and which layer feature of HEVC they utilise. More interesting though: how are they getting the second view from an iPhone?

        • Interesting that they claim spatial video is double in size. The MVC (multi view coding) extensions added to AVC/H.264 didn't produce dependent view substreams that were anything like the size of the primary stream. There's so much information that can be shared, and it could be 2D + deltas. At least that was my experience working with an encoder for stereoscopic Blu-ray Disc. This is covered in an interesting article: https://faculty.engineering.as... [asu.edu]. Apple are using HEVC, and it will be interesting to see how they do it and which layer feature of HEVC they utilise. More interesting though: how are they getting the second view from an iPhone?

          Second camera, of course.

          No secret as to why you have to capture Spatial Video in Landscape Mode. Note the camera layout:

          https://www.notebookcheck.net/... [notebookcheck.net]

          • by Malc ( 1751 )

            Right, crop of x0.5 + x1 lenses. But I was thinking more about: what's the point? They're so close together that they can't deliver much difference. They're closer together than human eyes.

            • Right, crop of x0.5 + x1 lenses. But I was thinking more about: what's the point? They're so close together that they can't deliver much difference. They're closer together than human eyes.

              I think that computational photography is helping here.

              Obviously, it must be working.

            • Close one eye and move your head very slightly to the right or left. The image shifts detectably, right? That's probably good enough for it to guess most of the nearby subject distances. Then it has the LIDAR to assist as well. As for what to "fill in", it probably uses AI to guess. It will be interesting to do left-right blink testing of the parallax shift on iPhone captured spatial video.
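
              That parallax maps to depth through the standard stereo relation Z = f * B / d (focal length in pixels times baseline, divided by disparity in pixels). A quick Python sketch with made-up numbers, since Apple's actual baseline and focal length aren't given anywhere in this thread:

                  def depth_m(focal_px, baseline_m, disparity_px):
                      # Z = f * B / d: small baselines still give usable
                      # disparity at close range.
                      return focal_px * baseline_m / disparity_px

                  f_px = 3000    # focal length in pixels (hypothetical)
                  base = 0.015   # ~15 mm lens spacing (hypothetical; human eyes are ~63 mm)
                  for d_px in (30, 10, 3):
                      print(f"disparity {d_px}px -> {depth_m(f_px, base, d_px):.1f} m")

              So even a centimeter-ish baseline resolves nearby subjects; it's the distant background that flattens out, which is presumably where the LIDAR and the guesswork come in.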

    • by dfghjk ( 711126 )

      Because they already have hardware that does one but not the other?

      • Presumably they didn't start working on the Vision Pro a week before they announced it. They've had plenty of time to make a hardware MVC codec/encoder for the iPhone.

    • Why is the spatial video double in size? They are capturing and storing two streams apparently. Why? Why don't they just store the delta and compress that? It seems really dumb to store two streams of images that are almost exactly the same. They should only store the difference (the stereographic information).

      Storage is cheap; processing power in a handheld, battery-powered device, not so much.

      You really don't think Apple tried both ways?

      • More than that, they probably have some of their GPU guys working on a hardware encoder to do exactly this in a future M-series chip, so they can offload it onto that purpose-specific component to do the work at a fraction of the power cost of doing it on today's hardware. And then let the marketing boys start yapping about the space savings and increased quality while not making your phone get hot AF while draining the battery to capture 5 minutes of "spatial video" likely at higher resolution than we are today.

        • More than that, they probably have some of their GPU guys working on a hardware encoder to do exactly this in a future M-series chip, so they can offload it onto that purpose-specific component to do the work at a fraction of the power cost of doing it on today's hardware. And then let the marketing boys start yapping about the space savings and increased quality while not making your phone get hot AF while draining the battery to capture 5 minutes of "spatial video" likely at higher resolution than we are today.

          It's not like they're stopping at 1.0 with as-yet-unreleased hardware.

          Although I doubt that it is going to be causing much load on the SoC to capture 2 HEVC streams (which are already being hardware encoded), which is pretty much what they are doing; I would also imagine that they will endeavor to offload even more to hardware (if they haven't already).

          In fact, I wouldn't be surprised if the A17 Pro SoC, that is unique to the iPhone 15 Pro and Pro Max, already has that enhancement.

    • It's entirely possible that someone may want to work with the actual captured video rather than the output of processing. Kind of like why music producers and video producers keep the originals and masters rather than just the .mp3 or .mkv files.

      Lossy processes are lossy.

      • Yeah, but it might be more lossy to stream two nearly identical streams. They could decrease the compression ratio instead.

  • > I can't describe it any better today than I did in June

    Time to find someone who can and let them do the review. I read his words and I don't know WTF he's babbling about. If he can't describe it, take the glasses off, change the Rx, and give it to someone who REALLY IS A TECH WRITER.

    God the annoying "influencer wannabe culture" is so stupid.

    • by dfghjk ( 711126 )

      SuperKendall has been auditioning for over a decade, he may not be a TECH WRITER but he is a /. poster. Of course, without John Gruber to copy-paste what could he say?

  • by haruchai ( 17472 ) on Monday November 13, 2023 @12:17AM (#64001427)

    I saw Attack of the Clones, supposedly the first major movie shot entirely with digital cameras, in a premium theatre, whatever the hell they're called or were called back then - AVX perhaps? - but not IMAX.
    I was struck by the clarity, but the really weird thing was what happened when I left the theatre by the side exit, moving quickly from near-total darkness to a bright, clear summer day. For about 15 seconds, the real world looked... grainy, but I wouldn't say pixelated.
    Still, it was the closest I've ever come to believing we might be living in the Matrix.
    I've never had a similar experience before or since.

    • by Anonymous Coward

      There is another way to describe what you're talking about. The "glass half full" description. That is, the real world looks sharp and the movie looked blurry. Which is actually what is happening.

      Like when I see a modern TV with its motion "enhancing" or whatever crap they have. It looks so weird, like thick syrupy latency. It's clear but blurry at the same time. I hate it. I turn that crap off on my TV. If you go from watching that to the real world then yeah, the real world looks sharp (you say "grainy") a

    • by dargaud ( 518470 )
      The bug report to the matrix was quickly taken into account and the problem fixed.
    • by unami ( 1042872 )
      Maybe that was because they filmed it at HD resolution. You didn't notice the pixels in theatre, but they burned into your retina.
  • by Alworx ( 885008 ) on Monday November 13, 2023 @03:51AM (#64001641) Homepage

    I find it absurd that with all these advanced technologies we're still struggling with UTF! :-/

    What's an âOEiPhoneâOE?

  • Someone actually implemented Foveated Rendering!!

    I've read about it described as the next step for VR for so long, so I am really happy to see someone finally make it real.
    Suddenly, the price tag for the Vision Pro doesn't seem so far fetched. Not that I can afford it, but if Apple did it, others will eventually copy it cheaper.

  • Where a studio engineer with his awesome speakers and equipment can make a recording sound absolutely phenomenal on them, but as soon as you listen to it on your crappy phone, it's total rubbish and worse than anything that came before because the mix was made for those awesome speakers and screw you, pleb, for not having them.

    Yeah, I'm so looking forward to these images that will probably even look great at equipment nobody who actually has a life outside of taking pictures will have but look even worse th

    • But, but, but, the vr headset is _only_ $3500!

      Maybe we can just declare vr a human right and everyone can get a headset for free.

      • The point is that something being astonishing, breathtaking and whatever is moot if 99% of the people will never experience it, and catering to that zero-point-whatever audience often means a worse experience for most others.

      • But, but, but, the vr headset is _only_ $3500!

        Maybe we can just declare vr a human right and everyone can get a headset for free.

        The first Sony CD Player was $1500.

        • And you might notice that cassette tapes still continued to exist and try to match the quality of the CD, as did the producers of audio because they knew that their market consisted of 99% tapes and vinyl records and only 1% CDs.

    • That's really not a good analogy at all, because there are plenty of people that DO have good audio setups. Even base trims of new cars are coming with decent audio setups that were unheard of outside of luxury brands 10 years ago, and bargain TVs have HDMI audio out / TOSLINK out connectors. Cheap-ass $40 earbuds have about the same performance as $250 headphones from 5 years ago. Even the bluetooth-connected "smart" speakers are pretty good at full-spectrum reproduction these days with the exception of

      • Ok, I'm far from an audiophile and I don't spend 1000 bucks on special cable for equally special people, but even I can notice the difference between the output of a cellphone-connected earbuds to that of noise canceling headphones connected to a studio sound card.

        The thing is that most people will rather use the former than the latter setup to listen to their music. Because most don't sit down at home to listen to some opera but rather need music on the go so they don't have to suffer the inane chitchat ab

        • Anyone that has a functional brain isn't spending a kilobuck on some bullshit cable either. Good news: you don't have to, because if it's an analog signal on a speaker wire, the signal is already amplified on an isolated circuit if the amplifier isn't a total piece of shit and isn't going to pick up any noise anyway unless you're running it a ridiculously long distance right next to an unshielded electric motor or refrigerator compressor or something. And since unamplified audio (and video) signals these

          • Could you reread what I wrote and reply again? Because answering this would feel like arguing with myself.

  • by MindPrison ( 864299 ) on Monday November 13, 2023 @04:31AM (#64001669) Journal

    The novelty will wear off after a few weeks and no one will be filming anything like this, simply because of the huge mask and the inconvenience.

    We've already had this; I've had it numerous times with numerous gadgets. What sets Apple's version apart is probably that it uses a sort of lidar-ish quick 3D depth rendering in combination with a textured video overlay; with that technique you don't really need a high polygon count. Cool - but it's still a novelty until that thing can be shrunk down to normal eyeglass size.

    LG had 3D recording on their LG 3D phone, and it was actually kind of amazing; the effect was there, and you even had it right on the display of the phone itself, because it had a parallax screen. It never took off - just like 3D televisions. It was a novelty.

    I've had (and still have) numerous 3D 360 cameras to record 3D 360 videos. They don't have the lidar scanning function that can make it possible to sort of "walk" around your filmed objects or people, but you still get the idea, as if you were "there". Until now the resolutions have been too low, and they are still insanely expensive if you want the good stuff: a good VR headset plus good 3D cameras with appropriate resolution. Lidar scanning or 3D scanning plus auto-texturing as a video can solve this quite well, even with low processing, so yeah, it's fun - but only for a while.

    You won't be using it after a while, because it's simply too impractical. You won't take that headset with you on vacation; it's bulky, clunky, impractical - which is the reason smartphones killed the video camera and photography with regular cameras. People want a simple, easy-to-carry item that they always have on them or with them.

    • Thanks to your comment, I looked it up and found https://en.wikipedia.org/wiki/... [wikipedia.org] 3D phones/tablets started with Sharp in 2002 and were continuously available ever since; one can currently buy a "Lume Pad 2" https://www.leiainc.com/lume-p... [leiainc.com] that does not use eyewear.

    • The novelty will wear off after a few weeks and no one will be filming anything like this, simply because of the huge mask and the inconvenience.

      We've already had this; I've had it numerous times with numerous gadgets. What sets Apple's version apart is probably that it uses a sort of lidar-ish quick 3D depth rendering in combination with a textured video overlay; with that technique you don't really need a high polygon count. Cool - but it's still a novelty until that thing can be shrunk down to normal eyeglass size.

      LG had 3D recording on their LG 3D phone, and it was actually kind of amazing; the effect was there, and you even had it right on the display of the phone itself, because it had a parallax screen. It never took off - just like 3D televisions. It was a novelty.

      I've had (and still have) numerous 3D 360 cameras to record 3D 360 videos. They don't have the lidar scanning function that can make it possible to sort of "walk" around your filmed objects or people, but you still get the idea, as if you were "there". Until now the resolutions have been too low, and they are still insanely expensive if you want the good stuff: a good VR headset plus good 3D cameras with appropriate resolution. Lidar scanning or 3D scanning plus auto-texturing as a video can solve this quite well, even with low processing, so yeah, it's fun - but only for a while.

      You won't be using it after a while, because it's simply too impractical. You won't take that headset with you on vacation; it's bulky, clunky, impractical - which is the reason smartphones killed the video camera and photography with regular cameras. People want a simple, easy-to-carry item that they always have on them or with them.

      You use your phone as usual (you must use Landscape Mode) for capture. Movies shot in Spatial Video will appear as normal 2D movies on regular displays.

      But when you get back home to your Vision Pro (or send the videos to your parents to watch on their Vision Pro), they will display in their full Spatial Splendor.

      They really did think this through.

      • Pretty sure, like all Apple "innovations", another company did the thinking and Apple just copied them.
        • More accurately: another company did the thinking and implemented a good idea in a shitty and useless way, and Apple did a little more thinking and implemented it in a usable and useful way that people actually want.

      • They really did think this through.

        They did. You can create the photos and videos on your "inexpensive" phone, but only people who spend at least $3500 on another Apple device can enjoy them in their full glory. They did think it through, but that thought was regarding how to sell more units of one of the most expensive devices they've ever made.

        • They really did think this through.

          They did. You can create the photos and videos on your "inexpensive" phone, but only people who spend at least $3500 on another Apple device can enjoy them in their full glory. They did think it through, but that thought was regarding how to sell more units of one of the most expensive devices they've ever made.

          Actually, they support taking spatial videos straight from Vision Pro.

          And who doesn't have a cellphone?

          BTW, how would you propose they do this, without involving a stereo camera and playback device?

        • And just how would you have gone about creating content for a new display device you're working on, where there is exactly zero content available but you have already shipped hardware capable of creating the content by the millions of units?

          My guess is exactly what Apple did: ship software capable of creating the content to your phones today, and then magically there's millions of example videos available a year from now when you ship your new display thing, created by millions of users and uploaded to the

  • Is that another 'new' name for stereographic image/video?

  • Is it just stereoscopic video? I am prepared to believe that looking at a high-definition video through some new dedicated headset is sharper and more immersive than looking at it on a VGA monitor covered in dust.

  • by Cryptimus ( 243846 ) on Monday November 13, 2023 @06:46AM (#64001781) Homepage

    Wildly over-engineered expensive helmet produces results consistent with expensive helmets. Film at 11!

    Apple fanboys think everything is invented by Apple. The rest of us.... have had sex before.

    • This right here. I only recently saw a review saying that the Apple Vision Pro is 3 years ahead of the Meta Quest 3 in technology. I find that review to be rather damning. Considering it's 7x the price I'd have expected it to be more like 5 years ahead.

      The comparisons being made everywhere are asinine. What's the story here? The product is "astonishing" "breathtaking" "compelling"? No shit. That's the bare minimum I expect given the cost of the device. I expect more.

  • by unami ( 1042872 ) on Monday November 13, 2023 @07:43AM (#64001867)
    he's not a shill, but Apple can usually count on him to be unreasonably positive about them. He also called the ridiculously tiny updates to Final Cut Pro last week the "best evidence that Apple remains keenly committed" - while it falls further and further behind with every substantial update the competition ships.
  • Until the porn industry widely adopts any new digital visual technology, you know it isn't going anywhere.

    I'll believe in cheap, usable, 3d whatever when it's all over the porn sites as a common viewing option.

"All we are given is possibilities -- to make ourselves one thing or another." -- Ortega y Gasset

Working...