Nvidia Blames Apple For Bug That Exposes Browsing In Chrome's Incognito (venturebeat.com)

An anonymous reader points out this story at VentureBeat about a bug in Chrome's incognito mode that might be a cause for concern for some Apple users. From the story: "If you use Google Chrome's incognito mode to hide what you browse (ahem, porn), this might pique your interest. University of Toronto engineering student Evan Andersen discovered a bug that affects Nvidia graphics cards, exposing content that you thought would be for your eyes only. And because this only happens on Macs, Nvidia is pointing the finger at Apple."
  • by xxxJonBoyxxx ( 565205 ) on Thursday January 14, 2016 @12:33AM (#51298353)

    >> I didn’t expect the pornography I had been looking at hours previously to be splashed on the screen

    I think you're either doing it wrong or you're not looking at the right stuff. (Hours? Really?)

  • by Anonymous Coward

    You insist on having your own slow ass OpenGL implementation for our cards, I guess you fucked up on security too.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      You insist on having your own slow ass OpenGL implementation for our cards, I guess you fucked up on security too.

      Patches from your proprietary GL implementation donated to the OpenGL Open Source project welcome, nVidia... don't bitch that it's slow when you're able to fix the slow yourselves.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        You insist on having your own slow ass OpenGL implementation for our cards, I guess you fucked up on security too.

        Patches from your proprietary GL implementation donated to the OpenGL Open Source project welcome, nVidia... don't bitch that it's slow when you're able to fix the slow yourselves.

        You don't understand. *Apple* insists on having their own OpenGL implementation for the GPUs they use (so they have identical GL API support on Intel, AMD and Nvidia). They don't use Nvidia's proprietary driver code, nor open-source code, and since they don't care about performance (because of Metal), their implementation is slow-ass...

        Now get off my lawn ;^)

      • Apple are welcome to base their drivers around Nouveau and Mesa. :)

    • by Anonymous Coward on Thursday January 14, 2016 @03:03AM (#51298623)

      iOS saves screenshots of applications for the task selector thingy and also for "fast" application switching, where the screenshot is used for the zooming effect and as a placeholder while the real application is still being (re)loaded. There is a separate screenshot for each orientation. It is possible that you launch or switch to the browser or some other application and iOS will display a possibly very old screenshot of your private porn browsing session or some other private stuff that you had closed and purged from the logs ages ago. During the application-switch effect the old screenshot is visible only momentarily, but the same images can also be viewed from the task selector.

      1. Device at orientation A: open browser, enter private mode and browse for some pron.
      2. Switch to the home screen (screenshot is saved) and change to orientation B
      3. Go back to browser and close all pron tabs
      4. Switch to the home screen (screenshot is saved but this one is for orientation B)
      5. Change back to orientation A and enter the task selector or go back to the application. The old private browsing screenshot should be visible.

      • I've tried this twice and it doesn't work. iOS 9.2, Chrome. This used to be an issue in 7, maybe 8? Even without the rotation.
  • Except it's not. (Score:5, Insightful)

    by Anonymous Coward on Thursday January 14, 2016 @12:41AM (#51298365)

    This isn't just on Apple's OS. While I have nothing like Mr. Andersen's writeup to prove it, I've seen this kind of bug happen on Windows.

    • Re: Except it's not. (Score:5, Informative)

      by kthreadd ( 1558445 ) on Thursday January 14, 2016 @02:18AM (#51298535)
      I've seen it on GNU/Linux with Nvidia cards and their non-free driver for several years. This is not new and it's not just Chrome.
      • by tibit ( 1762298 )

        This is a huge side channel that can be exploited to communicate between "isolated" processes. You can e.g. have some malicious JavaScript working on an off-screen WebGL context, inside a sandbox, and it can use this to communicate with malware running elsewhere on the same host. This is pretty bad IMHO.
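
        The underlying leak is easy to picture: ask the driver for storage, never write to it, and read back whatever happens to be there. A minimal sketch, assuming an OpenGL 3.x context and a loader such as GLEW (error handling omitted; purely illustrative, not the code from TFA):

        #include <GL/glew.h>  /* any GL 3.x loader will do */
        #include <stdlib.h>

        /* Allocate a texture without supplying data, attach it to a framebuffer
         * object, and read it back without ever clearing it. On a driver that
         * does not scrub reused VRAM, the readback can contain leftovers from
         * whatever used that memory before. */
        unsigned char *grab_uninitialized_vram(int w, int h)
        {
            GLuint tex, fbo;
            unsigned char *pixels = malloc((size_t)w * h * 4);

            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            /* NULL data: storage is allocated, but nothing is written into it. */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);

            glGenFramebuffers(1, &fbo);
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);

            /* Deliberately no glClear() here; that is the whole point. */
            glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

            glDeleteFramebuffers(1, &fbo);
            glDeleteTextures(1, &tex);
            return pixels;  /* caller frees; contents are whatever VRAM held */
        }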

        • by mikael ( 484 )

          There was an article some time ago on how you could get an application to scan the USB devices for keyboards and other devices, figure out the location of the device driver in memory and the character buffers used by interrupts, then pass this information onto a CUDA or OpenCL application and do all the snooping on the GPU. The application containing the original USB scanner could then terminate.

      • I don't know if it still does, but the open-source Radeon driver used to do something similar to this as well... when you logged in to X. It all started when they committed the fix for the screen briefly displaying jumbled garbage when logging in. Yeah, it no longer displayed garbled garbage on the screen, but it showed a screen from an old browsing session instead, which was also cached somewhere on disk, so it was persistent even between full power-downs.

        Hell Chromebooks did it as well when context switching between

      • I've seen it on GNU/Linux with Nvidia cards and their non-free driver for several years. This is not new and it's not just Chrome.

        That is because OpenGL does not require NVidia to do the right thing. Most other drivers do, but NVidia doesn't because it might impact performance, and they are not forced to do it. As far as I can parse their blame on Apple, they are basically saying Apple is not requiring them to blank textures either, so NVidia doesn't. It is Apple's fault for not forcing NVidia.

    • by tgv ( 254536 )

      Me too, and on tablets as well, in "Edge", no less. Usually it's very short-lived. To me the lesson is: applications need to erase all memory before closing a "private" session if the OS doesn't guarantee it.

    • Re:Except it's not. (Score:4, Interesting)

      by AmiMoJo ( 196126 ) on Thursday January 14, 2016 @06:59AM (#51299115) Homepage Journal

      Unlikely, because Windows does enforce clearing of newly allocated memory, including on the GPU. The drivers would fail WHQL certification if they didn't. They probably didn't bother on Mac OS, either because of an oversight or to get a little more performance.

      It might be possible within specific apps if they mismanage GPU memory, but certainly not across apps as described in TFA. Well, unless there is some unknown bug, but Nvidia are saying there isn't and it is tested for WHQL certification.

      Gonna need to see some more evidence than an anecdote I'm afraid. All available evidence says that Windows is unaffected.

      • Unlikely, because Windows does enforce clearing of newly allocated memory, including on the GPU. The drivers would fail WHQL certification if they didn't.

        I see pieces of other applications while starting or quitting applications all the time through my nVidia driver, and sometimes those things are old. nVidia is definitely not scrupulous about clearing video memory on Windows.

      • by tibit ( 1762298 )

        Windows does enforce clearing of newly allocated memory, including on the GPU

        It doesn't. Really. It only clears the paged memory when pages change ownership or are first initialized. A set of pinned pages is retained by the graphics driver for as long as the driver wishes, and it's completely up to the driver to clear those when the logical ownership of the resource stored on any given page is assigned to a new process. Furthermore, it might not even be that a single page in the GPU memory belongs to one process only. It might store textures or other buffers from two or more process

    • by gsslay ( 807818 )

      And it isn't just Chrome either. In fact it has nothing to do with Chrome. It applies to any application you may have that might display content/information that you don't want to be randomly visible at a later time.

    • I've experienced the issue in multiple iOS revisions and devices as well

  • Simple explanation (Score:3, Insightful)

    by mcrbids ( 148650 ) on Thursday January 14, 2016 @12:47AM (#51298375) Journal

    So, your program allocates some memory. Should it initialize the memory to make sure it's all a bunch of zeros? Apparently, Nvidia doesn't think so.

    So, a program running on your OS requests some memory. Should the OS initialize the memory before handing it to the application? Apparently, Apple doesn't think so.

    Either answer is right.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      It needs to be cleared by the OS 100%. The OS can't expect/assume this is done elsewhere, or stuff like this happens.

    • by Kjella ( 173770 ) on Thursday January 14, 2016 @01:43AM (#51298489) Homepage

      So, your program allocates some memory. Should it initialize the memory to make sure it's all a bunch of zeros? Apparently, Nvidia doesn't think so. So, a program running on your OS requests some memory. Should the OS initialize the memory before handing it to the application? Apparently, Apple doesn't think so. Either answer is right.

      Not really. An application will typically allocate and release memory all the time, being forced to clear it every time is massive overkill and a performance problem. The driver exposes the GPU memory, the OS allocates it to applications just like with RAM. It's the only one that knows when memory switches application context and must be cleared. So there's really only one sane solution.

      • by dgatwood ( 11270 ) on Thursday January 14, 2016 @02:08AM (#51298523) Homepage Journal

        Not really. An application will typically allocate and release memory all the time, being forced to clear it every time is massive overkill and a performance problem. The driver exposes the GPU memory, the OS allocates it to applications just like with RAM. It's the only one that knows when memory switches application context and must be cleared. So there's really only one sane solution.

        The usual solution is basically:

        • Whenever you add a new page into an application's address space, you map a zero-filled page as copy-on-write. If the page never gets touched, it is zero-filled, and you take the performance hit only when it ceases to be all zeroes.
        • Small allocations are allocated using a pool allocator backed by those pages.

        This works well as long as the CPU is in charge, ensuring that any dirty data must have originated in some other part of the app (by reusing a pool region). Where it starts to get hairy is when you have a GPU that has access to all of RAM and uses a separate page table with separate COW flags, etc.
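
        A tiny CPU-side illustration of that zero-fill copy-on-write behaviour, assuming a POSIX-ish system that supports MAP_ANONYMOUS (the GPU-side analogue is exactly the part that appears to be missing here):

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 4096;
            unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return 1;

            /* Reads as zero: the mapping aliases a shared zero page until
             * the first write forces a private copy. */
            printf("first byte: %u\n", p[0]);

            p[0] = 42;  /* the copy-on-write fault happens here, not at mmap time */
            munmap(p, len);
            return 0;
        }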

        I'm not certain what went wrong in this particular case. However, I do remember a really annoying change in about 10.6 or 10.7 where Apple stopped using a vertical blanking interrupt to control various aspects of the GPU's operation and maybe some other parts of the OS. This improved battery life, IIRC, but the result is that you'll often see the GPU draw a frame of video before the previous contents of VRAM have gotten wiped. I would not be at all surprised if that was what happened here.

        As for whose responsibility it is to clear the memory, my gut says that if Chrome wants to guarantee that its video buffers are cleared, Chrome is responsible for doing it. Otherwise, it should assume that VRAM is a shared resource, and anything it puts in VRAM can potentially be accessed by any other app at any time for any reason. With that said, I'm open to other opinions on the matter.

        • by BoberFett ( 127537 ) on Thursday January 14, 2016 @04:02AM (#51298765)

          It's not just about a moment of graphical corruption; that's an annoyance. But a process being able to access the RAM leftovers from a previous process is begging for memory-based attacks. Even though it's on the GPU, it's a vulnerability. What's to say that GPU wasn't just displaying banking info? The OS should not assume the application is friendly and will blank the VRAM itself. That security is on the OS.

          • by dgatwood ( 11270 )

            It's not just about a moment of graphical corruption; that's an annoyance. But a process being able to access the RAM leftovers from a previous process is begging for memory-based attacks. Even though it's on the GPU, it's a vulnerability. What's to say that GPU wasn't just displaying banking info? The OS should not assume the application is friendly and will blank the VRAM itself. That security is on the OS.

            In principle, I agree with you. In practice, though, this sort of bug is really easy to make when designing

        • by x0ra ( 1249540 )
          I fail to see how this can increase security, as you just need to fault by writing a single byte and you've got a mostly dirty page.
          • by dgatwood ( 11270 )

            Copy on write does not work that way. It copies the originally mapped page, which is a single pre-zeroed physical page that is kept around specifically for that purpose. That copy operation completes (thus wiping the victim physical page) before the OS returns control to the process.

      • by AmiMoJo ( 196126 )

        Clearing allocated memory before handing it to applications is required by POSIX and generally a really, really good idea from a security point of view. The performance hit is minimal (modern CPUs and RAM can write gigabytes per second easily) and can be mitigated, e.g. by using a pre-allocated pool for small allocations.

        All modern operating systems clear allocated memory. It's basic security to stop one app stealing data from another, or even worse the kernel. You could do it the other way around and try t

        • by tibit ( 1762298 )

          This of course is true, but what is handled by the graphics driver is not memory, but OpenGL primitives like buffers. The OS has absolutely no bearing on any of that, it's up to the driver to do the right thing. Just as a graphics driver can pass unclean buffer primitives around, a serial driver bug might not clear a communications buffer, or a fancy keyboard driver with OLED displays in the keys might leak the data from your porn setup where the function keys have pics of your fave pornstars on them :)

          Basi

      • So, your program allocates some memory. Should it initialize the memory to make sure it's all a bunch of zeros? Apparently, Nvidia doesn't think so. So, a program running on your OS requests some memory. Should the OS initialize the memory before handing it to the application? Apparently, Apple doesn't think so. Either answer is right.

        Not really. An application will typically allocate and release memory all the time, being forced to clear it every time is massive overkill and a performance problem. The driver exposes the GPU memory, the OS allocates it to applications just like with RAM. It's the only one that knows when memory switches application context and must be cleared. So there's really only one sane solution.

        No. The driver knows as well. There is a concept called OpenGL contexts, and they can be configured to share texture data with each other; the problem is that the driver leaks texture data between contexts that shouldn't be sharing it. The driver knows perfectly well that those contexts should not be sharing texture data.
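
        A rough sketch of that explicit sharing, using GLFW here purely for brevity (the same share-group idea exists in GLX, WGL and CGL); the last argument to glfwCreateWindow names the context to share objects with:

        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return 1;
            glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);

            GLFWwindow *a = glfwCreateWindow(64, 64, "a", NULL, NULL);
            /* Passing `a` as the share argument lets `b` see a's texture
             * objects; passing NULL keeps the contexts isolated. */
            GLFWwindow *b = glfwCreateWindow(64, 64, "b", NULL, a);

            /* ... a texture created in context `a` is visible by name in `b`,
             * but never in a third context created with a NULL share ... */

            glfwDestroyWindow(b);
            glfwDestroyWindow(a);
            glfwTerminate();
            return 0;
        }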

    • There is a reason why C has distinct memory allocation functions like malloc, calloc and realloc ...
      and there is a reason why C++ says: constructors are only run if the class defines one.

    • All modern OSs will initialize the memory because there is a clear security issue with allowing one application access to the old contents of a random block of memory. It could contain passwords or who knows what else.

      On the other hand, GPU memory is primarily used for rendering graphics. The security implications are less severe if information leaks. Has there ever been any guarantee information won't leak? So why do users assume that it won't? It is likely NOT cleared for speed reasons. Everyone wants a f

    • Arguably, when you request memory from a modern OS, basic security says it shouldn't be filled with random stuff from other programs.

      This has been true since at least the 90s.

      Multi-processing environments have been solving this for years, like they're supposed to.

  • Does that mean I have to throw away my porn iPad and go back to my porn ChromeBook?

    I hate that. Just moving the bookmarks will take forever.

    • by SeaFox ( 739806 )

      Does that mean I have to throw away my porn iPad and go back to my porn ChromeBook?

      I hate that. Just moving the bookmarks will take forever.

      Joke Fail.
      You're using Chrome on both, so bookmarks are synced through your Google account.

      • Erm, is anyone actually using that feature?
        For my part, I don't. I like it that different devices have different bookmarks. The bookmarks I really want to share are on Delicious anyway.

      • by lucm ( 889690 )

        You'd really put porn bookmarks in the cloud?

        I don't use Chrome sign-on ever, even for regular browsing.

        • by SeaFox ( 739806 )

          I don't use Chrome sign-on ever, even for regular browsing.

          If you're that paranoid, you shouldn't be using Chrome to start with.

          • I'm not paranoid, it's based on an unpleasant incident.

            Two years ago, many of my friends complained that they were receiving spam from one of my Outlook.com email addresses. It was weird because it was not the sign-in address for my Outlook.com account; the spam was sent using one of my aliases that I used only with a Google account for non-important stuff (Chrome, Youtube, Google search preferences and such but no Gmail) on one specific machine.

            I didn't know how this happened, so I turned off that laptop (

  • by slacka ( 713188 ) on Thursday January 14, 2016 @01:27AM (#51298455)

    I've done some GLSL programming, and it's not unreasonable for clearing a GPU buffer to take 1/20 to 1/10 of the time of the actual operation on that buffer. How many Nvidia users (read: gamers) would prefer to take a 5% performance hit to prevent occasional glitches like this?

    This has absolutely nothing to do with Nvidia's drivers. It is a glitch in Diablo III and maybe something Chrome could address for the paranoid out there. Meanwhile, if you're really that worried about someone seeing a glimpse of your porn hours earlier, just turn your computer off/on before allowing anyone to use it next. Problem solved.

    • by afourney ( 2183166 ) on Thursday January 14, 2016 @01:47AM (#51298495) Homepage
      It's not a 5% hit. You only have to clear the buffer once on exit. And Nvidia is right: this is something the OS should do (just like it closes file handles and frees other resources on exit). Why not leave it up to the app? Because apps don't always exit cleanly.
      • "You only have to clear the buffer once on exit."

        One of the cases I've heard of this is during a crash. In that case, you may have no clean exit in which to clear the buffer.

      • by mikael ( 484 )

        That would mean the application has to keep track of every texture, framebuffer and memory block it ever reserved on the GPU. If anything, the GPU is going to have to maintain a "must-be-wiped" list of memory blocks that are cleared when the application is closed or when they are reused.

    • by _merlin ( 160982 ) on Thursday January 14, 2016 @01:48AM (#51298497) Homepage Journal

      The thing is, for security the operating system should scrub memory before supplying it to an application. Otherwise you get all kinds of data leakage. The virtual memory system does this when an application requests more pages. SPARC CPUs generate an interrupt when they run out of "clean" register windows. NTFS ensures that sectors that are allocated but not yet written in a file read as zeroes (FAT32 on Windows 95 didn't; you'd read back whatever was there on the disk). By the same token, the OS should scrub GPU resources before supplying them to an application. You don't need to do this on every allocation, only when the allocation comes from RAM that was not previously assigned to that application.

      • by AHuxley ( 892839 )
        Think of all the apps running at the same time that could be looking in at something that's still open.
    • If you're calling this a bug in Diablo, you are kind of missing the point. If the data is left in memory, another process on the machine would be able to retrieve that data for malicious purposes. An actual exploit of this vulnerability would be quite tricky, but it does end up happening. This is really a simpler version of the famous Heartbleed bug. What we've seen returned in *this* case is just some graphical data, but there's no reason to believe that it couldn't be something like a private key that
    • Clearing the buffer on app context request or at context release is a one-time event.

      And no, it doesn't take long; it is in fact the quickest way to touch every pixel. Anyone who's telling you it takes too long is using the wrong API to do it.

  • Blame Chrome (Score:5, Interesting)

    by pushing-robot ( 1037830 ) on Thursday January 14, 2016 @01:35AM (#51298465)

    Chrome advertises its Incognito mode as leaving no traces behind. Therefore, it should be responsible for wiping its framebuffer, just as it clears caches, cookies and history. It's like writing a file shredder that doesn't actually overwrite files, then blaming the OS and hard drive manufacturer for the oversight.

    It might be nice if framebuffers and such were zeroed on release, but like overwriting files, it's a time/energy/security tradeoff. Besides, the screen isn't really protected anyway; IIRC applications on most OSes can capture the screen without even admin privileges. After apps are sandboxed into seeing only their own windows we can talk about securing the framebuffer.

    • This, this, this! (Score:5, Insightful)

      by tlambert ( 566799 ) on Thursday January 14, 2016 @02:25AM (#51298555)

      Chrome advertises its Incognito mode as leaving no traces behind. Therefore, it should be responsible for wiping its framebuffer, just as it clears caches, cookies and history. It's like writing a file shredder that doesn't actually overwrite files, then blaming the OS and hard drive manufacturer for the oversight.

      This, this, this!

      If it's incognito, it should not trust anyone else to ensure the privacy of the user's data, not even the OS. We already know that it's possible to use CPU cache bugs as a covert channel to snoop on other processes running on your computer; if the application claims to maintain security, it needs to zero the memory itself.

      As an aside, a GPU is a better machine for zeroing pages than the main CPU, and won't pipeline stall or time stall the main CPU by doing it, and GPUs are traditionally really good at manipulating large amounts of memory. So one has to wonder: why doesn't nVidia expose a primitive that Chrome can then use to zero the pages of a frame buffer, before or after it is used?
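
      For what it's worth, core OpenGL does already give an application a GPU-side way to do this for itself; a hedged sketch of scrubbing a render target before releasing it (GL 3.x context assumed, function name illustrative):

      #include <GL/glew.h>  /* assumed loader */

      /* Zero a framebuffer's color attachment on the GPU before releasing it,
       * so stale contents never outlive their owner. glClear() runs on the GPU;
       * no zeros cross the bus from the CPU. */
      void scrub_and_release(GLuint fbo, GLuint color_tex)
      {
          glBindFramebuffer(GL_FRAMEBUFFER, fbo);
          glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
          glClear(GL_COLOR_BUFFER_BIT);
          glFinish();  /* make sure the clear has actually executed */

          glBindFramebuffer(GL_FRAMEBUFFER, 0);
          glDeleteTextures(1, &color_tex);
          glDeleteFramebuffers(1, &fbo);
      }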

    • Yeah, seems to me incognito mode should do this as it's shutting down a tab 'to remove all traces'. That said, apps should probably zero out before they start as well, since they are the ones that look unkempt when they display legacy data.

    • Re: Blame Chrome (Score:4, Insightful)

      by guruevi ( 827432 ) on Thursday January 14, 2016 @03:11AM (#51298633)

      There are also some limitations to what a program promises versus what it can do. File shredding is a prime example: modern SSDs do not even write to the same physical location every time you write to the same file. Battery-backed controllers fool the OS into thinking a certain action was completed while it really wasn't committed to disk yet. If you pull a disk between the shredding event and the cache flush, you could easily read things. Heck, if your magnetic drive marks a portion of itself as "bad blocks", the data on those blocks doesn't get overwritten, and SSDs have cells that can physically go "read-only"; with the right tools you can read the data in the "bad blocks" or "read-only" cells.

      • by stikves ( 127823 )

        SSDs have a very good secure erase mode, but it is very low-level. I had to do it once, when I forgot the password on my Samsung portable SSD. Basically the drive sends a concurrent pulse to all cells and then drains them (that's what I understood). It took a very short time, and since it happens to the entire drive, and the initial data was encrypted anyway, I don't think any data would be recoverable after that point.

        But this is not an advertised feature, and I had to speak with the customer service to

        • by guruevi ( 827432 )

          It wouldn't clear any blocks that are so worn out that they've gone read-only, would it? There are states in SSDs where the cells have individually become ROMs.

    • The issue isn't another app seeing the framebuffer used by Chrome incognito. That's probably harmless. The issue is that, if the pages aren't being zeroed properly, there may be other sensitive information that leaks. What if somebody comes up with GPU accelerated TLS? Sure they *should* zero the pages too. The problem is that there are *a lot* of places where applications would need to zero memory. And it turns out that it's way too easy to screw this up. Hence the feature has been moved to the oper
    • by mjwx ( 966435 )

      Chrome advertises its Incognito mode as leaving no traces behind. Therefore, it should be responsible for wiping its framebuffer, just as it clears caches, cookies and history. It's like writing a file shredder that doesn't actually overwrite files, then blaming the OS and hard drive manufacturer for the oversight.

      Copy/paste from Chrome's incognito mode. The emphasis is theirs.

      Pages that you view in incognito tabs won't stick around in your browser's history, cookie store or search history after you've closed all of your incognito tabs. Any files that you download or bookmarks that you create will be kept. Learn more about incognito browsing [google.com]

      Going incognito doesn't hide your browsing from your employer, your internet service provider or the websites that you visit.

      So they don't advertise that it leaves no traces behind. In fact, it's quite obvious that it does leave things behind.

  • by endangeredcritters ( 1026216 ) on Thursday January 14, 2016 @01:55AM (#51298505) Homepage
    There is a far simpler way to defeat Chrome's incognito mode: just use it for a while. After some unknown (but not forever) period of use, it will start to not forget history even after it's been shut down and restarted. At least in 'Version 44.0.2403.107 (64-bit)' running in Linux Mint.
    • Nice try. I've been caught by those "watch this GIF long enough and something amazing will happen" jokes before. You won't fool me again!

  • I've got an older GTX 760 running on an HP Z820. I run Ubuntu on this thing and use the nvidia-352 drivers. When I log out of GNOME 3 and log back in through LightDM, I see the exact same symptoms. I can see what was previously displayed on my framebuffer, including Firefox and Chromium windows.

  • *opens link*

    ... pornography ... splashed on the screen ...

    *closes window*

  • by Anonymous Coward

    If you enable 3D mode in your VirtualBox VMs, then you will see the screen buffer contents when you reboot them. Tried this with RHEL7 guests on a Linux host; the host has an NVIDIA card.

    Also, with the same NVIDIA setup, if I boot from RHEL into Ubuntu, I can see the RHEL screen buffers in Ubuntu when logging into the desktop.
    NVIDIA isn't clearing the buffers properly.

    • This is Nvidia being Nvidia. They fucked up in their drivers, AGAIN, and are buying time by pointing blame and spewing nonsense until a new driver comes out next week that randomizes frame buffer addresses for different applications. Then they will crow about how they fixed someone else's problem, because they are such nice people.

      This story is 60% bullshit, with 40% slashdot dupe added in, as we already saw this one on Windows earlier in the week.

  • by dlingman ( 1757250 ) on Thursday January 14, 2016 @08:05AM (#51299295)

    There are two real issues here.

    The first is that malicious programs could open up, grab screen buffers, and get access to stuff that had been on the screen to use for their nefarious purposes.

    This is bad, and unless we get decent support for isolating the frame buffers (and other graphics memory) between apps at either the driver or hardware layer, it's not going away anytime soon. Don't want this? Power cycling (all the way off, not just hibernate) between application launches would do it.

    The second is sloppy programming on the part of non-malicious applications. That's what is being talked about in the article. Diablo apparently asked for a frame buffer and then presented it, as is, to the user without putting what it wanted in place, trusting it to be in a particular state. Which it wasn't.

    If you want a black screen to show the user, then write zeros into your buffer before you show it. Decent compilers/languages will tell you if you've tried to read from uninitialized variables, and you should never trust that anything you've asked for dynamically is in a safe state unless you've explicitly requested that it be cleared before being handed to you. Why should a resource from the graphics card be treated any differently?
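
    As a small aside, this is the same "ask for it cleared, or clear it yourself" contract C already gives you on the CPU side; a sketch for illustration only, nothing here is specific to the GPU bug:

    #include <stdlib.h>
    #include <string.h>

    void example(size_t n)
    {
        unsigned char *dirty = malloc(n);     /* contents indeterminate */
        unsigned char *clean = calloc(n, 1);  /* guaranteed all zeros */
        if (!dirty || !clean) {
            free(dirty);
            free(clean);
            return;
        }

        /* If you took the malloc() route and need a blank buffer, blank it
         * yourself before showing it to anyone; same rule as the GPU buffer. */
        memset(dirty, 0, n);

        free(dirty);
        free(clean);
    }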

    NVidia is right about one thing here - most of the time, nearly all of the time, the thing you do with that buffer you're given is to write your stuff into it, completely overwriting it, and it would slow things down if they had to guarantee that it was cleared before handing it out to you. If your program doesn't care enough to do so itself, that's not really their fault.

    It would be nice if, on program exit, all GPU resources used by that app were flushed, but again, that would involve the OS needing to be told of all the GPU resource allocations and deallocations so it could clean up properly, and that too would probably slow things down. Not a lot, but enough to be annoying when your game stutters.

  • As far as I'm concerned, it's *everyone's* fault. What we have here is a bunch of companies playing an immature pass-the-buck game.

    Chrome's incognito is supposed to be secure. Wouldn't any reasonable person expect a wipe of used VRAM to be included as part of the cleanup process when an incognito window is closed? I know I would. But they don't, because they expect it to be handled by the driver.

    NVidia's driver should be wiping memory that has been released by the calling app. It's *their* driver

  • Isn't it very likely that users would have 'regular' Chrome running almost all the time and periodically open up incognito tabs to do banking or just browse pr0n? Once finished, they would close the incognito tabs/windows but would most likely keep Chrome itself running for a good while longer.

    Another use case is working in MS Word on two documents at once. One is top secret, the other is not; you finish with the top secret one and close it, but you keep working on the other document, keeping Word open.

    In

  • I've seen this, and have been able to reliably reproduce the issue with mplayer, many, many times.
