Next-Gen JavaScript Interpreter Speeds Up WebKit 193

JavaScript is everywhere these days. Now WebKit, the framework behind (among others) Safari and Safari Mobile, as well as the as-yet-unreleased Android platform, is getting a new JavaScript engine called SquirrelFish, which the developers claim provides massive speedups over the previous one. The current iteration of the engine is "just the beginning," they say; in the near future, six planned optimizations should bring even greater speed. With JavaScript surviving as a Web-page mainstay despite many early gripes, and now integral to some low-powered mobile devices, this may mean many fewer wasted seconds in the world.
  • iPhone Safari (Score:3, Insightful)

    by ryanguill ( 988659 ) on Tuesday June 03, 2008 @02:51PM (#23641869) Journal
    I cannot wait to get this on my iPhone. I would like to see some more in-depth information about how this compares to Tamarin, though. If it truly is better than Tamarin, I wonder if Mozilla would consider swapping Tamarin for SquirrelFish. As an aside, that is an awesome logo...
    • Just for the hell of it, I've got Firefox 3 RC1 running on an ancient Toshiba Libretto 110CT with 64MB RAM running W2K on a Pentium-MMX 233. Looking at JS benchmarks online, with Firefox 3 (presently) leading the way, I figured it was worth a try... FF3 is way more usable than Firefox 2. In fact, the full version of GMail actually runs on the Libretto! (Firefox 2 would go into JS hell with the CPU pegged at 100% for 10-20 seconds at a time...)

      One can only hope that we could squeeze some more JS pe
    • by ianare ( 1132971 )

      As an aside, that is an awesome logo...
      Meh. I think the real thing [wikipedia.org] is much cooler.
    • It'd also be fun to compare the benchmark for a number of different machines, browsers, and OS/framework builds. (any overclocked iPhones out there???)

      On a 2 GHz MacBook Core Duo with OS X 10.5.3 (Webkit 5525.18 4/20/08 shows in profiler)

      Firefox 15205.0ms +/- 1.3% -- score about 4 runs per minute
      Safari 3.1.1 (5525.20) 4149.6ms +/- 0.5% -- score about 15 runs
      iCab 4.0.1 4142.2ms +/- 0.4% -- score about 15 runs

      Essentially identical results suggest that iCab 4 is also using Webkit.
      It looks like my tes
      • Re:iPhone Safari (Score:5, Informative)

        by buckhead_buddy ( 186384 ) on Tuesday June 03, 2008 @10:33PM (#23646887)

        On a mac, it's simple to install and remove the WebKit nightly. It's literally just dragging and dropping a specially built application.

        1. Make sure you have the latest Safari installed. WebKit doesn't touch the User Interface, so you still need Safari around.
        2. Go to the Webkit Nightly Builds [webkit.org] site and click to download the Mac OS X version.
        3. If you have "Open safe downloads" wisely turned off, you will need to find the file you downloaded (probably named WebKit-SVN-r#####.dmg) and open it. The disk image will mount and you will see a gold version of the Safari compass icon labelled WebKit. If your browser auto-opens "safe" downloads, just switch to the Finder and you'll see that gold WebKit icon all alone in a window.
        4. Drag the gold WebKit icon into your Applications folder. It will not conflict or erase Safari since it has a different name. You are now done with the install image; you can eject and trash the .dmg file from your download folder.
        5. To use the nightly builds of WebKit, launch the gold WebKit app rather than Safari. The first time you launch it, Mac OS X's security feature will warn you that this is an app downloaded from the internet; go ahead and approve the launch. You may also be warned about the incompatibility of some browser plugins. Everything else should seem identical to Safari.

        Now, you'll only be using the WebKit nightly libraries when browsing with that gold WebKit icon. To prove this to yourself, you can visit the Acid3 test [acidtests.org] page using both Safari and WebKit without quitting either and see very different results. Safari still has major incompatibilities while WebKit seems almost perfect.

        Finally, when you are ready to uninstall WebKit, quit the app and drag the gold colored icon from the applications folder to the trash. Or, drag a new version that you download the next day on top to replace the old nightly.

  • by AKAImBatman ( 238306 ) <{akaimbatman} {at} {gmail.com}> on Tuesday June 03, 2008 @02:53PM (#23641915) Homepage Journal
    ...how does this compare to Tamarin [mozilla.org]? With Javascript running for longer periods of time, a runtime-optimizing JIT seems to make a lot of sense. SquirrelFish's optimized bytecode engine sounds interesting, but I can't help but feel that it's going to fall short in the next-gen race for Javascript performance.

    Of course, anything that improves JS performance in browsers (making some of the libraries faster and/or hardware accelerated always helps... hint, hint!) is a win for the consumer. And from that perspective this sounds very interesting. :-)
    • by samkass ( 174571 ) on Tuesday June 03, 2008 @03:02PM (#23642071) Homepage Journal
      According to this link [satine.org], the SquirrelFish in the latest nightly build (without the extra optimizations) can already compile *and* run the source code between 1.08x and 1.94x as fast as Tamarin when Tamarin is just running pre-compiled code. It's fast.
      • by Anonymous Coward on Tuesday June 03, 2008 @03:13PM (#23642233)
        It feels like Safari is moving so incredibly quickly. WebKit 3.1 already felt around twice as fast as WebKit 3.0 in terms of JavaScript execution; now SquirrelFish is around one and a half times as fast again... in what's basically its first stable implementation. And they're already targeting optimisation points, and it's already caught up to Tamarin (and iirc WebKit 3.1 is at least on par with Firefox 2/3). Absolutely amazing.

        The iPhone is the one to really benefit from this, because it's where the pauses are currently noticeable.

        And IE really, really suffers in comparison. Microsoft has to be wincing about all this, if only for pride's sake... I'd love to see speed improvements to IE 8 beyond what's known already [ejohn.org], though the DOM speed improvements will help a lot for parity.
        • by Jugalator ( 259273 ) on Tuesday June 03, 2008 @03:58PM (#23642863) Journal
          I agree, Safari for Windows is actually a bit tempting, especially if "hacking" it a bit (actually, it can probably not even be called "hacking" in geek circles at least) to use the latest WebKit builds. The only downside of that one is its (sorry..) piss poor memory performance. It's worse than pretty much anything I've tried. A few hours browsing and I had it use 300-400 MB RAM. That's like the bad old Firefox 2 days at worst, from my experiences. It's worse than IE 7 too.
        • by moderatorrater ( 1095745 ) on Tuesday June 03, 2008 @04:01PM (#23642901)
          I see it as an indicator of exactly how bad the previous js interpreters have been.
          • by TheRaven64 ( 641858 ) on Tuesday June 03, 2008 @07:39PM (#23645541) Journal
            To give some context to your remark:

            The fastest dynamic language implementation at the moment is Objective-C. This isn't as fast as it could be in the GCC implementation (I'm working on it in LLVM, and hope to have some nice speedups soon), but it's not far off. This compiles dynamic lookups to fast native code and compiles everything else to static code.

            Next up in terms of speed are the various Smalltalks. Something like GNU Smalltalk does well in terms of speed. It JIT compiles things in a way quite similar to Objective-C, but with a very naive compiler with very little optimisation. This is slightly faster than something like Squeak, which compiles to bytecode and interprets this.

            Bytecode is generally quite fast. Basically, the interpreter is a loop which contains a switch statement. Each instruction is a fixed size with (typically) the first byte being an opcode, and so this is compiled to a static jump table and the 'decode' cost for each instruction is a load and a jump. This is slightly slower than naive compilation, but not much. In some cases it can be faster, because your interpreter is likely to be compiled with a compiler that does quite aggressive optimisation and so you may get fewer register spills than something like GNU Smalltalk (which lacks a peephole optimiser, for example). The new JavaScript interpreter is of this nature - it compiles to bytecode and then interprets this.
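The switch-dispatch design described above can be sketched in a few lines of JavaScript. This is a hypothetical toy stack VM, not SquirrelFish's actual instruction set (SquirrelFish is register-based), but the dispatch loop has the same shape: a flat array of fixed-size instructions, an opcode in the first slot, and a loop around a switch.

```javascript
// Toy bytecode VM: each instruction is [opcode, operand].
// Opcodes are made up for illustration.
const PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

function run(program) {
  const stack = [];
  let pc = 0;                      // program counter
  for (;;) {
    const [op, arg] = program[pc++];
    switch (op) {                  // 'decode' is one load and one jump
      case PUSH: stack.push(arg); break;
      case ADD:  stack.push(stack.pop() + stack.pop()); break;
      case MUL:  stack.push(stack.pop() * stack.pop()); break;
      case HALT: return stack.pop();
    }
  }
}

// (2 + 3) * 4 → 20
const result = run([
  [PUSH, 2], [PUSH, 3], [ADD, 0],
  [PUSH, 4], [MUL, 0], [HALT, 0]
]);
```

A native compiler would emit machine code for the additions directly; the bytecode version pays only the small per-instruction dispatch cost, which is why it sits between an AST-walker and a JIT in speed.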

            Even slower are interpreters which try to run the AST directly. These turn each statement (or part of a statement) into a generic operation and step through these. This is what the old WebKit implementation did.

            Part of the problem with this kind of thing is that you are always trading compile time for run time. The benchmarks they published take around 10 seconds to do a complete run. This includes both your compile and run time. If you take more than a second to compile, people will notice. If you take one second to parse, and then execute the AST slowly, you may take 10 seconds in total. If you take one second to parse and four seconds to compile and optimise, then execution has to be nearly twice as fast just to break even - your perceived speed will be the same. This is why things like the StrongTalk and Self VMs did dynamic profiling - they began by interpreting everything, but if you spent a lot of time executing a part of the code, they would aggressively optimise it. The newer Java VMs do the same thing.

            For most JavaScript, this is a complete waste of time. It is very rare for JavaScript to run for more than a fraction of a second at a time, and so the latency caused by interrupting execution is much worse than just running it slightly more slowly. The ActionScript VM in Flash is very different, since it is designed to run scripts that stay running for minutes at a time and are fairly CPU intensive. If people start using JavaScript and canvas tags or SVG in the same way as they use Flash, then a JS runtime with a JIT compiler and optimiser is likely to be a win.

            • by cryptoluddite ( 658517 ) on Tuesday June 03, 2008 @11:23PM (#23647217)

              they began by interpreting everything, but if you spent a lot of time executing a part of the code, it will aggressively optimise it. The newer Java VMs do the same thing.
              Newer as in since 1.2... ten years ago.

              I have to disagree with you about Smalltalk. Ahead of it in performance is LISP, and on the great benchmark shootout even LuaJIT (!) is faster than VisualWorks Smalltalk -- VW is pretty fast for a Smalltalk. Smalltalk has been optimized a lot, but it's not really a 'fast' language.

              For most JavaScript, this is a complete waste of time. It is very rare for JavaScript to run for more than a fraction of a second at a time and so latency caused by interrupting execution much worse than just running it slightly more slowly. The ActionScript VM in Flash is very different, since it is designed to run scripts that stay running for minutes at a time and are fairly CPU intensive.
              JavaScript will be the main programming language in the next decade at least, imo, and improving execution speed of JavaScript is never a waste of time. The faster it is, the more it will be used.
              • by TheRaven64 ( 641858 ) on Wednesday June 04, 2008 @08:02AM (#23649825) Journal
                I forgot to count Lisp - SBCL really sets the standard for how fast dynamic languages ought to be. You are right that LuaJIT is faster than Smalltalks (I'm really surprised it's faster than VW, although looking at the benchmarks it seems performance is pretty close, except for two cases where VW does really badly). Compiling Smalltalk is harder than compiling Lua, because all of the flow control is done via calls to closures (an if statement in Smalltalk is done by sending an ifTrue: message to an expression with a closure as an argument), while Lua has explicit flow control, so I'm not entirely surprised. My point was not to compare specific languages, but rather execution techniques (JIT vs Bytecode vs interpreted AST).

                JavaScript will be the main programming language in the next decade at least, imo, and improving execution speed of JavaScript is never a waste of time.
                Oh, I completely agree. JavaScript certainly isn't my favourite language, but it's a lot nicer than any of the other mainstream languages at the moment. My point is not that it's not worth optimising, it's that many traditional optimisation approaches will have a negative impact on user experience. A lot of JavaScript 'programs' at the moment have a tiny CPU usage and complete very quickly. The test of the new implementation took 2 seconds to complete with a bytecode interpreter. In this kind of situation, the rules are very different.

                Compare something like TCC to GCC. TCC can compile and run a short program in two seconds, although it does hardly any optimisation. GCC will still be in the middle of running optimisations by the time TCC has finished running the program. On the other hand, compare a server-side component of a web app. GCC may take a few minutes to compile it while TCC would take a few seconds, but the version compiled with GCC will be able to service twice as many clients on the same hardware. Something like Google Spreadsheet, or browser-based games, are the same. They use enough CPU power that adding a little compiler overhead for better code is a net gain. Things like dynamic menus and the Slashdot background comment loader aren't - they run fast enough on a crappy JS implementation, and on a faster one they are speed-limited by the network, not the CPU. For these, a fast bytecode interpreter will always be (or, at least, seem) faster than an aggressive JIT, because the time they spend executing is so small.

            • Re: (Score:3, Interesting)

              by samkass ( 174571 )
              [citation needed]

              I try not to be a Java apologist on Slashdot, but the latest JDK6 (especially in -server mode) and public builds of JDK7 are awfully fast. Considering Objective-C's dynamic lookup overhead and lack of inlining opportunities, is it really much faster? Especially when you're running it on a chip architecture other than the one it was compiled for, such as moving between some of the Atom, Core, AMD, and virtual machines. (Although I suspect the LLVM stuff will bring Objective-C up to Java p
        • Re: (Score:2, Insightful)

          by JasonB ( 15304 )
          W.r.t. the performance of a browser's JS and HTML engines: A browser is much more than the sum of its fundamental rendering technologies. If performance were the most important driver of customer adoption, we all would have switched to using Opera years ago. But each time I try to move away from Firefox, I end up moving back because of either:

          A. A site compatibility problem.
          B. A FF plugin that I cannot live without.

          In a perfect world, alternative Linux browsers would provide support for FF plugins, but the
        • by sznupi ( 719324 )
          Quick development was apparently one of the reasons Apple chose KHTML over Gecko.

          At least Jamie Zawinski ( http://en.wikipedia.org/wiki/Jamie_Zawinski [wikipedia.org] ), who was heavily involved in Netscape/Mozilla during the early years, seems to agree...

          http://jwz.livejournal.com/132696.html [livejournal.com]
          http://jwz.livejournal.com/138051.html [livejournal.com]
          • by sznupi ( 719324 )
            And in light of all this it's interesting that Tamarin was actually _donated_ to Mozilla by Adobe... ("you can't do it properly by yourself so here, use this, dammit" crossed my mind a few times...)
      • by ToLu the Happy Furby ( 63586 ) on Tuesday June 03, 2008 @04:01PM (#23642893)
        Faster than the current Mozilla builds of Tamarin, which have not been optimized and in many situations are still slower than SpiderMonkey.

        On the other hand, it bears noting that Tamarin isn't going to make it into shipping code until Mozilla 2.0--presumably that will be Firefox 4, but it's still so far off that I believe that determination hasn't even been made yet. Whereas on past experience I would expect SquirrelFish to be in a shipping Safari build much sooner.
  • by the donner party ( 1000036 ) on Tuesday June 03, 2008 @03:02PM (#23642065)
    What's next? rabbitelephant? curvaceous coelacanth? fishfishfishrabbitfish? We desperately need an automatic "hip open source software name generator" before someone gets hurt.
    • Re:squirrelfish? (Score:4, Interesting)

      by Anonymous Coward on Tuesday June 03, 2008 @03:10PM (#23642185)

      squirrelfish? What's next? rabbitelephant? curvaceous coelacanth? fishfishfishrabbitfish? We desperately need an automatic "hip open source software name generator" before someone gets hurt.

      Usually what happens is a development team uses themed codenames to easily distinguish product versions. In this case, they're probably using fish names. I worked somewhere with the same theme. We had all sorts of fish names and eventually they got a bit wacky (aquaman). The problem is that in OSS or other open development models, those names become public instead of a properly chosen name. With OSS, this happens because name recognition builds up as people discuss the unfinished software. We only had this happen once, where Man-O-War was demoed for the defense department and they liked the name so much we had to keep it for PR reasons.

    • Re:squirrelfish? (Score:4, Informative)

      by BenoitRen ( 998927 ) on Tuesday June 03, 2008 @03:12PM (#23642229)

      SeaMonkey [seamonkey-project.org], of course. :)

      I made a Mozilla product name generator [skynet.be] a half year back.

      • Cosmic Cat Creations' FireSomething plugin [cosmicat.com] is similarly fun. Only it does not (yet) install on Firefox 3.
        (Haven't checked if there's some incompatibility, or if a simple change in the maximum version does the trick)
      • That is so cool. The animal name of my choosing was 'Slug', which resulted in getting the Mozilla product name "ThunderSlug"...
      • by Arkham ( 10779 )
        You just want someone to name their product Buttmonkey.

        Yeah, I looked at your source.
      • by Nullav ( 1053766 )
        ...ButtFish. I shall use this. Thank you for your generous gift to the Free Software community.
    • Re: (Score:2, Informative)

      by Cochonou ( 576531 )
      Your post made me laugh - a lot.

      But incidentally, the squirrelfish is an actual fish (just like it turned out that the firefox was also an existing animal).
    • by Sentry21 ( 8183 )
    • Re: (Score:3, Funny)

      by prog-guru ( 129751 )

      What's next? rabbitelephant? curvaceous coelacanth? fishfishfishrabbitfish?
    • Well to appeal to the business community we need a seriously imaginative name such as 'webkit 7' ;)
    • by Jon Abbott ( 723 )
      Dopefish! [dopefish.com] Oh wait, that's what came first...
  • Still Stateless (Score:4, Insightful)

    by Lumenary7204 ( 706407 ) on Tuesday June 03, 2008 @03:12PM (#23642219)
    This still isn't going to fix the fact that (X)HTML pages are transported and managed by what is still fundamentally a stateless protocol, XMLHttpRequest and AJAX notwithstanding.

    Every time you click a button that triggers a server-side transaction, the page needs to explicitly transmit info - a cookie, GET/POST variables, something - back to the server to "remind" it of its current state.

    To me, this would seem to be where most of our time is wasted...
    • I don't think that idea will be too popular. Making it stateful would severely impact the scalability, resource demand, and robustness of modern websites.
    • Re: (Score:2, Insightful)

      by MightyYar ( 622222 )

      This still isn't going to fix the fact that (X)HTML pages are transported and managed by what is still fundamentally a stateless protocol

      I guess I don't understand. Wouldn't having a better browser-side language make stateful applications more likely? I mean, you dismiss AJAX, but wouldn't faster Javascript make AJAX much better? There are already many "applications" that run on the web that are very similar to their desktop counterparts - better performance will make these more common, I would think.

      • Re:Still Stateless (Score:5, Informative)

        by moderatorrater ( 1095745 ) on Tuesday June 03, 2008 @04:08PM (#23642991)
        He's not commenting on stateless applications, but the stateless quality of HTTP. Every time the browser communicates with the server, it has the exact same overhead, whether it's an ajax request or a full web page. The amount that's sent back may differ, but it's still sending all the information associated with instantiating a new connection, sending information about the browser, all the cookies, etc. When you build stateful applications on top of HTTP, you incur a lot of overhead in those headers and cookies being sent back and forth. For applications trying to stay synced to the server, some people have found overhead of over 75%. This inefficiency is being made up for by packing more information into each request, stretching requests out to take up more time, and just plain fast processors and connections.

        The GP is saying that if we had a stateful protocol, we could eliminate most of the overhead and make applications move a lot faster.
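To put a rough number on that header tax, here is a quick back-of-the-envelope calculation. The request below is illustrative (made-up but typical header lines, not captured from any real browser): for a poll whose answer is a tiny "nothing new" body, almost every byte on the wire is protocol overhead.

```javascript
// A representative AJAX polling request; browsers resend headers
// like these on every single request, cookies included.
const headers = [
  'GET /poll?since=1212500000 HTTP/1.1',
  'Host: example.com',
  'User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5)',
  'Accept: application/json, text/javascript, */*',
  'Accept-Encoding: gzip,deflate',
  'Cookie: session=abc123; prefs=compact; lastvisit=1212400000',
  'Connection: keep-alive',
  '', ''
].join('\r\n');

const payload = '{"new":[]}';   // the actual useful data in the reply
const overhead = headers.length / (headers.length + payload.length);
// overhead comes out well above 0.9 here - i.e. >90% of the bytes
// moved are headers, in line with the 75%+ figures quoted above.
```

Real numbers vary with cookie size and URL length, but the shape of the problem is the same: the fixed per-request cost dwarfs a small payload.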
        • Re: (Score:2, Insightful)

          by XHIIHIIHX ( 918333 ) *
          Well, for lazy website operators maybe. You can strip your HTTP headers down to almost nothing, and you only need one cookie to accomplish everything.
        • by naasking ( 94116 )
          A stateful protocol wouldn't help at all. Who's maintaining that state? Server-side: hello DoS. Client-side: well that's just a stateless protocol.

          HTTP is perfectly fine the way it is. Seamless stateful interaction is a problem for server-side languages/frameworks to handle. Don't blame HTTP for a framework deficiency. There are frameworks that don't have this problem [sourceforge.net].
        • The future of the web is a browser and server exactly like what we have now, but ditching HTTP in favor of a single TCP socket per client, like X.

          The gains made in responsiveness and security would be huge, not to mention the fact that the server could push data without the client asking for it first.

    • Re: (Score:2, Informative)

      by ergo98 ( 9391 )

      Every time you click a button that triggers a server-side transaction, the page needs to explicitly transmit info - a cookie, GET/POST variables, something - back to the server to "remind" it of its current state.

      How do you think TCP works atop the "stateless" IP?

      The whole stateless/stateful thing is extremely dated, and not even logically correct. HTTP w/cookies is stateful, and has been for a long time [yafla.com].

    • Re: (Score:3, Insightful)

      by _|()|\| ( 159991 )

      To me, [the network] would seem to be where most of our time is wasted...

      As always, it depends. I can think of two cases offhand in which bandwidth was cheaper than JavaScript/DOM. Using JavaScript to zebra stripe a large table basically hanged the browser for five seconds, so we added a class to every other row and used CSS. Adding and deleting rows to a large table was extremely slow, so we rendered all of the extra rows hidden by default, then unhid them as necessary. (Not a general solution, but it wo

    • Re:Still Stateless (Score:4, Insightful)

      by Jamie Lokier ( 104820 ) on Tuesday June 03, 2008 @04:17PM (#23643105) Homepage
      The only state most apps need to send to the server are a "session" value (cookie or form) and any data you entered. Complex page state is kept on the server, in a session-associated database. If requests are sending a little more, no harm. If they're sending a lot of data every time, which isn't user-entered on that specific page, they're not very well coded.

      The real excessive state transfer is the repeated sending of large quantities of (X)HTML from server to client. AJAX helps with that.
    • Not just time, but bandwidth. The overhead of an HTTP poll is staggering when you consider that in many cases the reply is 'no, no data for you yet.' I saw a proof of concept a few years ago delivering XHTML snippets over XMPP - when the server had more data for the client, it pushed it. No polling, very little latency. Shame there isn't an XMLXmppRequest equivalent to XMLHttpRequest in modern browsers.
    • by mstone ( 8523 )
      Most of our time is wasted getting the first bit from the client to the server.

      Most network provider SLAs guarantee a round-trip time of around 50 ms or less between any two backbone routers in the US. Add consumer connection latencies (which tend to suck), non-backbone routers, HTTP connection latencies, process spawning times, and all that, and we can expect to see a delay of maybe 50-75 ms from the moment the first bit leaves the client machine and the moment it can be processed by the server.

      Once the f
  • I think all future web browsers should have standard JavaScript and CSS plug-and-play functionality, allowing users to choose their favorite JavaScript interpreter and CSS interpreter. For instance, I like Safari's JavaScript interpreter but Firefox's CSS interpreter... then I could just get those plug-ins and put them in my browser (which would have a built-in HTML interpreter).
    • by Daniel Dvorkin ( 106857 ) * on Tuesday June 03, 2008 @03:23PM (#23642375) Homepage Journal
      It's the old "modular vs. monolithic" argument -- do you write your app as a bunch of small pieces that all communicate through some standard protocol, so you can swap them in and out and upgrade them at will, or do you make everything tightly coupled and interdependent? Browsers, like most apps, tend to go back and forth on this, because there are real advantages and disadvantages to each approach (and most apps end up meeting somewhere in the middle.) Every few years someone comes along with an idea that promises to Revolutionize! Programming! by making everything modular and completely independent, and everyone gets all excited about it and plays with it for a while, and then comes to the conclusion that if it works, it's still too slow. The good ideas that come out of these Revolutions! In! Programming! get absorbed into the mainstream (e.g. OOP, and to some degree microkernels) but they never seem to take over completely.
    • by Goaway ( 82658 ) on Tuesday June 03, 2008 @04:04PM (#23642949) Homepage
      There is no such thing as a "CSS interpreter" separate from the browser. The rendering engine and the CSS handling code are almost entirely one and the same.
    • by cnettel ( 836611 )
      As much of the actual GUI in Firefox is dependent on its Javascript implementation, what you are proposing is far from trivial (at least, you have to agree on an object model, and then you are actually close to Windows Scripting!).
  • Can anyone tell me if this will hit Konqueror in the new KDE?

    I've been running KDE 4.1 through debian experimental. Buggy right now, but no show-stoppers. KDE and the new Konqueror are surprisingly fast.
    • Re:Konqueror/KDE 4.? (Score:4, Informative)

      by Anonymous Coward on Tuesday June 03, 2008 @03:42PM (#23642637)
      Oh yes I can tell you: Konqueror comes with KHTML by default. There's a WebKit KPart (which is not quite ready), and there's a GSoC student working on it, so it might work better by the end of this summer. If you want an open source browser on Linux using WebKit, you can use Arora: http://code.google.com/p/arora/ or the Epiphany branch that uses WebKit.
    • by makomk ( 752139 )
      Nope. However, Konqueror also has a new bytecode-based JavaScript interpreter known as KJS/Frostbyte [kdedevelopers.org], which hit trunk in time for KDE 4.1 and also gives some nice performance improvements. (It was actually merged into trunk several weeks ago, at about the same time as Squirrelfish was announced on the WebKit mailing list. I suspect it may actually predate Squirrelfish.)
  • Will Apple continue to play nice with the OSS world and release this new engine back to KHTML?

    And is it possible that Tamarin (Mozilla's version of this) and this will merge, creating some ridiculous new super-fast JS magic engine, interpreting code yet to be written?
    • Re: (Score:3, Informative)

      by Anonymous Coward
      the source is already available: http://trac.webkit.org/browser/trunk/JavaScriptCore

      why would the WebKit team be interested in merging code from a slower implementation?
    • by pembo13 ( 770295 )
      I am pretty sure that KHTML has been merged/dropped and there is only WebKit in Konqueror now. But I am subject to correction on this.
      • Re:Real question: (Score:4, Interesting)

        by paulpach ( 798828 ) on Tuesday June 03, 2008 @04:31PM (#23643295)

        I am pretty sure that KHTML has been merged/dropped and there is only WebKit in Konqueror now. But I am subject to correction on this.

        No, Konqueror is using KHTML and will keep using it for the foreseeable future. There is a Google Summer of Code project to work on the WebKit KPart [google.com] so that Konqueror will be able to use WebKit by the end of the summer, but it probably won't be mainstream for a while:

        A while ago, konqueror developers posted a FAQ [froglogic.com] describing the future of KHTML. As of today, it still applies

        A possibility is that a new browser may be added to KDE, tuned for web browsing, as opposed to Konqueror, which is a Swiss Army knife - kind of like what Dolphin did for file management. There is already a WebKit-based browser [google.com] in the works that could achieve this in the future.

    • by samkass ( 174571 )
      I think the KHTML guys just adopted WebKit at this point, but Apple has been extremely good with the OSS community in this respect.

      And I think Tamarin uses a fundamentally different approach, so I don't see a merger here. But friendly competition is always good.
  • Javascript grows up (Score:5, Informative)

    by Slur ( 61510 ) on Tuesday June 03, 2008 @03:20PM (#23642337) Homepage Journal

    "With JavaScript surviving as a Web-page mainstay despite many early gripes..."
    This notion has long been outmoded... well, at least for the past few years.

    Javascript is doing more than just surviving. Early implementations of Javascript were quite buggy and standards were pretty lax. Things have improved significantly since "Javascript" became ECMAScript. The name may still have "script" in it, but it's a huge misnomer. Javascript is a full-fledged language - a very powerful one with many unique properties, and very useful if you know how to apply design patterns.
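One small illustration of those "unique properties" (this example is mine, not from Crockford's lectures): closures give you the classic module pattern, i.e. private state with no class machinery at all.

```javascript
// Module pattern: an immediately-invoked function returns an object
// whose methods close over private state.
var counter = (function () {
  var count = 0;               // private: invisible outside the closure
  return {
    increment: function () { return ++count; },
    value: function () { return count; }
  };
})();

counter.increment();
counter.increment();
// counter.count is undefined - only the returned methods see `count`,
// so counter.value() now reports 2.
```

That kind of encapsulation is exactly the sort of design-pattern work the parent is talking about, and it was available in even the early "scripting" implementations.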

    I encourage anyone involved in building websites, widgets, or enterprise applications to check out the Javascript lecture series by Douglas Crockford of Yahoo! located at http://video.yahoo.com/video/play?vid=111585 [yahoo.com] to get a real feel for the power of modern Javascript.

    And have a look at the modern AJAX frameworks like YUI and JQuery, which are being used to develop some pretty complex applications.
    • Re: (Score:3, Interesting)

      by p0tat03 ( 985078 )

      I think the early gripes were due to the fact that practically nobody was using JavaScript in a productive way. No, we don't want bouncing images, no, we don't want insane drop-down menus that don't behave right, and no, we don't want the web page to break the back/forward buttons by doing some underhanded sly scripting work. Or worse! We don't want stupid popups that ask us for our name.

      I think JavaScript has proven now that it has *some* redeeming qualities and legitimate uses :)

      • This [sourceforge.net] is why I love Javascript. It's a beautiful language, with so many nice high-level features like first-class functions and built-in regexps.

        The problems only come when you have to make it compatible with multiple browsers. If you just target one, it's a dream to use.
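        Those two features can be shown in a few lines. This is just a toy sketch of the language features being praised, not anything from SquirrelFish itself (written in 2008-era JS, so no fancy syntax):

        ```javascript
        // First-class functions: functions are ordinary values that can be
        // stored in variables and passed to other functions.
        function twice(f) {
          return function (x) { return f(f(x)); };
        }
        function addExclaim(s) { return s + "!"; }
        var shout = twice(addExclaim);

        // Built-in regexp literals: pattern matching with no extra library.
        var vowels = "squirrelfish".match(/[aeiou]/g);

        console.log(shout("fast"));   // "fast!!"
        console.log(vowels.join("")); // "uiei"
        ```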
    • by djbckr ( 673156 ) on Tuesday June 03, 2008 @04:13PM (#23643049)
      I appreciate the sophistication of JavaScript, but I hate writing in it. Call me old fashioned I suppose, but I simply don't like the holes you can dig in JS. I recently became aware of the Google Web Toolkit. It really bridges the structured programming I like with the Web 2.0 feel I like my sites to have.
    • by jamshid ( 140925 ) on Tuesday June 03, 2008 @04:30PM (#23643277)
      This was a really interesting article about the kind of optimizations Javascript is getting. Btw, it still amazes me that after the GUI class library wars of the '90s, and all those Java UI frameworks in the early 2000s, "javascript over http" is the state of the art in human-computer interfaces. Anyone who had accurately predicted this future would have been labeled a madman.

      http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html [blogspot.com]

      "Why JavaScript? Well, it was Ajax. See, what happened was... Lemme tell ya how it was supposed to be. JavaScript was going away. It doesn't matter whether you were Sun or Microsoft or anybody, right? JavaScript was going away, and it was gonna get replaced with... heh. Whatever your favorite language was.

      I mean, it wasn't actually the same for everybody. It might have been C#, it might have been Java, it might have been some new language, but it was going to be a modern language. A fast language. It was gonna be a scalable language, in the sense of large-scale engineering. Building desktop apps. That's the way it was gonna be.

      The way it's really gonna be, is JavaScript is gonna become one of the smokin'-est fast languages out there. And I mean smokin' fast."
    • very useful if you know how to apply design patterns.

      If we're talking about *Javascript* design patterns -- common useful Javascript idioms -- then I think this is a useful statement. If we're talking about the common idioms that have filtered out from C++ and Java known as "design patterns", as applied to languages that don't need many of them, then I'd say Javascript is pretty useful even if you don't know much about them. Possibly more useful.

      http://www.nofluffjuststuff.com/show_session_view.jsp?presentat [nofluffjuststuff.com]
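      A classic example of a Javascript-native idiom (as opposed to a transplanted C++/Java pattern) is the module pattern, where a closure provides private state. A minimal sketch:

      ```javascript
      // Module pattern: the immediately-invoked function's local variables
      // act as private state; the returned object is the public interface.
      var counter = (function () {
        var count = 0; // private; unreachable from outside the closure
        return {
          increment: function () { count += 1; return count; },
          value: function () { return count; }
        };
      })();

      counter.increment();
      counter.increment();
      console.log(counter.value()); // 2
      console.log(counter.count);   // undefined -- the private variable is hidden
      ```

      No classes, no `private` keyword -- just closures, which Javascript has had from the start.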
  • The sooner this happens the sooner a Jython engine (javascript->python interpreter) becomes possible. Get Google to host it like the library hosting article of a few days ago, and profit!

    //why are you looking at me funny?
    • Re: (Score:3, Informative)

      Jython is Java + Python, not Javascript + Python. Two completely different beasties.

      That being said, if the Squirrelfish VM and interpreter strategy are applicable to other languages besides JavaScript, some sort of "JSPython" strategy for putting lightweight (i.e., not requiring the JVM as Jython does) client-side Python scripts on web pages would be pretty cool. There doesn't seem to be any suggestion of that so far; presumably (and quite sensibly) the Squirrelfish folks are concentrating on getting eve
      • Right, that was what the "take two" was about; I was aware of the Java bridge. But with the random crowd this place gets, thanks for making that clearer. I saw a project that emitted JS from Java bytecode, and I was just wondering if the same thing could be done, rendering JS from Python bytecode. I think about this too often. They should, as an earlier post said, make the interpreters pluggable. That's a big difference from just allowing any ol' code to run - I would imagine the contenders could be vetted pret
        • Re: (Score:2, Offtopic)

          by MightyYar ( 622222 )
          Well, there's pyjamas [google.com], which does the same thing that Google Web Toolkit does, only with Python.

          There's also a demo around somewhere of someone using PyPy [codespeak.net] to compile the Bubble Bobble game to Javascript from Python.
  • I'm surprised that they weren't already using either a threaded interpreter or a bytecode interpreter. It's such an obvious step when performance is on the line, and there must be a variety of such interpreters available; it's been a standard middle ground between a fully compiled language and a raw text interpreter since the '70s at least. The DEC Fortran compiler used threaded code, and Smalltalk used bytecode, both in the '70s.

    I won't even get into the fact that they weren't doing almost-free
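    For anyone unfamiliar with the technique: a bytecode interpreter parses the source once into compact numeric opcodes, then executes those in a tight dispatch loop instead of re-walking a syntax tree. A toy stack-machine sketch (for simplicity only -- SquirrelFish itself is reportedly register-based, and its real encoding looks nothing like this):

    ```javascript
    // Toy stack-machine bytecode interpreter. Opcodes are small integers;
    // the dispatch loop is a single switch over the instruction stream.
    var PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    function run(code) {
      var stack = [], pc = 0;
      for (;;) {
        switch (code[pc++]) {
          case PUSH: stack.push(code[pc++]); break;           // inline operand
          case ADD:  stack.push(stack.pop() + stack.pop()); break;
          case MUL:  stack.push(stack.pop() * stack.pop()); break;
          case HALT: return stack.pop();
        }
      }
    }

    // (2 + 3) * 4 -- compiled once to bytecode, then dispatched cheaply.
    console.log(run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])); // 20
    ```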
  • by Etcetera ( 14711 ) on Tuesday June 03, 2008 @04:16PM (#23643087) Homepage
    With there finally being a nice Javascript implementation that cleanly and efficiently compiles to bytecode, I'm wondering if the dynamic-typing work in the Parrotcode [parrotcode.org] project could be of some assistance. The ECMAScript implementation they're already working on still has a long way to go, and this would be one way to help consolidate development efforts -- plus get some additional motivation behind both projects.
  • Are there any browsers under Linux that track the current (nightly) WebKit libraries? I've seen a couple of them online, but a lot just seem like really basic wrappers and aren't updated.

    I wanna give it a try; I've more and more come to dislike running Firefox on my older hardware.
    • by IceFox ( 18179 )
      We don't have nightly binaries, but the Arora browser does build against WebKit trunk: http://arora-browser.org/ [arora-browser.org] And it is not a wrapper; it has enough features that I can use it for the majority of my browsing.
    • Are there any browsers under Linux that track the current (nightly) Webkit libraries?

      There are prerelease versions of Konqueror and Epiphany. The trick is getting them to consistently install. I've had as many failures installing the pre-release version of Konqueror as successes.

  • I think this has great potential. Now that all the parsing/compiling has been accomplished, I think they should set up a system where they can store the compiled bytecode and simply retrieve it when a page with the same javascript is loaded (e.g. reloading GMail).

    The next step after that is of course to JIT the bytecode, meaning compile it to native code for the host architecture and eliminate all unnecessary calls. Even without further optimization or register allocation, such JIT-ed code would run MUCH MUCH faster. Unfortu
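    The caching idea suggested above can be sketched in a few lines. Everything here is hypothetical -- `compileToBytecode` and `djb2Hash` are toy stand-ins for whatever compile step and digest a real engine would use:

    ```javascript
    // Toy hash of the script text, used as the cache key.
    function djb2Hash(s) {
      var h = 5381;
      for (var i = 0; i < s.length; i++) h = ((h * 33) + s.charCodeAt(i)) | 0;
      return h;
    }

    var compileCount = 0;
    function compileToBytecode(source) {
      compileCount++;                 // stands in for the expensive parse/compile
      return { ops: "[bytecode for: " + source + "]" };
    }

    var bytecodeCache = {};
    function getBytecode(source) {
      var key = djb2Hash(source);
      if (!(key in bytecodeCache)) bytecodeCache[key] = compileToBytecode(source);
      return bytecodeCache[key];      // reloads hit the cache, skipping compilation
    }

    getBytecode("var x = 1;");
    getBytecode("var x = 1;");        // second load: cache hit, no recompile
    console.log(compileCount);        // 1
    ```

    A real engine would also have to worry about cache invalidation and memory pressure, which is presumably where it gets hard.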
  • Great name (Score:5, Funny)

    by szquirrel ( 140575 ) on Tuesday June 03, 2008 @04:39PM (#23643411) Homepage
    Truly the 200X decade will be remembered as the "Decade of Retarded Technology Names".

    Did someone make a rule that every new tech has to have a name that would make me sound like a fucking idiot if I tried to pitch it to my boss?
  • Just downloaded the latest build for Mac OS X and thought I'd try the Acid3 test to see whether there was a tangible speed improvement, but I can't load the darn URL. I guess I wasn't the only one who thought of this...
    • I can access the Acid3 test [acidtests.org] without problem. I also wonder if this will cause Safari to pass the performance aspect of Acid3 [hixie.ch]. If it doesn't now, it looks like it soon will.
      • I can get there now, too. Maybe it was slashdotted, maybe something else was wrong.

        It gets 100, but certainly not under 33ms, and there's no custom favicon. So I guess there's still some work to be done...
        • by bdash ( 598142 )
          There's not supposed to be a custom favicon.

          Ian Hickson says: [hixie.ch]

          If a browser passes all 100/100 subtests and gets the rendering pixel-for-pixel correct (including the favicon!), then it has passed the standards-compliance parts of the Acid3 test. The rest is just a competition for who can be the fastest.

          Firefox 3rc1 displays a red cat favicon when viewing the Acid3 test page, but no favicon when viewing the reference rendering. A quick inspection of the source of the reference rendering shows that the lack of

  • by sqldr ( 838964 ) on Tuesday June 03, 2008 @05:31PM (#23644069)
    the framework behind (among others) Safari and Safari Mobile,

    It's bad enough that web developers ignore my minority browser (rather than defaulting it to the same template as Safari), but ignoring the history of WebKit completely must be hugely insulting to the authors of KHTML. Give them some credit.
    • Re: (Score:2, Interesting)

      by Gavagai80 ( 1275204 )
      Konqueror doesn't use WebKit (yet), so it'd be silly to lengthen every summary to insert "and by the way, WebKit owes a lot to Konqueror's KHTML." Not compulsively mentioning something where it's irrelevant to the subject isn't the same as ignoring it.
      • It's a shame you got modded Troll. You are factually correct. Konqueror uses KHTML, but it's planning to switch to WebKit. WebKit derives from KHTML, but sheez, you can't list the whole history of everything in a short blurb.
  • Will Rhino (http://www.mozilla.org/rhino/) get this? We use it extensively as an embedded scripting engine on the server side - and use a lot of CPU cycles running it.
  • Will it run Linux?
  • Still seems slow... (Score:5, Interesting)

    by Jon Abbott ( 723 ) on Tuesday June 03, 2008 @10:47PM (#23647015) Homepage
    I just loaded the latest nightly of WebKit, which from what I gather is supposed to be using Squirrelfish, but it still seems to run the "Wundermap" at wunderground.com more slowly than Firefox 3.0RC1. I consider the Wundermap to be the ultimate browser test, as it has to process so much JS and images... I recently tried running it on a Mac Pro 8-core at the local Apple store, and it still loaded slowly! The Mac Pro absolutely flew through everything else I threw at it, including playing multiple HD movies at the same time, but when loading the Wundermap, it is almost as slow as my puny 2.0 GHz G5.

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"