Inkwell No Longer From the Newton?

CrezzyMan writes "From this post on the Newtontalk.net mailing list: Some of you may be interested to know that in the Inkwell section of Apple's website, the original text (posted straight after the keynote), 'Based on the Newton's 'Print Recognizer' - widely considered to be the world's first genuinely usable handwriting recognition solution - Inkwell's handwriting recognition is highly accurate and extensively tested,' has been changed to: 'Built on Apple's Recognition Engine - Inkwell's handwriting recognition is the best in the industry.' Steve must really hate the Newton..." I'd be more likely to consider Inkwell a good technology if I knew it was from the Newton, but then I was an actual Newton user. Most people erroneously think the HWR in Newton OS was bad (thanks to The Simpsons!).
  • Simpsons (Score:2, Funny)

    by Anonymous Coward
    Hey, Dolph. Take a memo on your Newton. "Beat up Martin."

    *writes memo*

    *Newton translates to: "Eat up Martha."*

    Bah.

    *Throws Newton at Martin*
  • The Simpsons? (Score:3, Informative)

    by Captain Pedantic ( 531610 ) on Thursday August 01, 2002 @08:54AM (#3991341) Homepage

    Wasn't it Doonesbury [doonesbury.com] which effectively killed off any hope for the Newton? [handy.ru]
  • by Anonymous Coward on Thursday August 01, 2002 @08:54AM (#3991344)
    Neural networks provide robust character recognition for Newton PDAs

    Larry Yaeger, Apple Computer

    While on-line handwriting recognition is an area of long-standing and ongoing research, the recent emergence of portable, pen-based computers (personal digital assistants, or PDAs) has focused urgent attention on usable, practical solutions.

    Pen-based PDAs depend wholly on fast and accurate handwriting recognition, because the pen serves as the primary means for inputting data to the devices. To meet this need, we have combined an artificial neural network (ANN) character classifier with a context-driven search over character segmentation, word segmentation, and word recognition hypotheses to provide robust recognition of hand-printed English text in new models of Apple Computer's Newton MessagePad.

    Earlier attempts at handwriting recognition used strong, limited language models to maximize accuracy. However, this approach failed in real-world applications, generating disturbing and seemingly random word substitutions known colloquially within Apple as "The Doonesbury Effect" (due to Garry Trudeau's biting satire based on first-generation Newton recognition performance). We have taken an alternative approach, using bottom-up classification techniques based on trainable ANNs, in combination with comprehensive but weakly applied language models. By simultaneously providing accurate character-level recognition, via the ANN, with dictionaries exhibiting very wide coverage of the language (as well as special constructs such as dates, times, and phone numbers), plus the ability to write entirely outside those dictionaries (at a low probability), we have produced a hand-print recognizer that some have called the first usable handwriting recognition system.
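    As a rough illustration of this "weakly applied" language model idea, here is a minimal sketch in Python. The toy lexicon, the per-character probabilities, and the out-of-dictionary penalty are all invented for the example; Apple's actual search was far more elaborate.

    ```python
    import math

    LEXICON = {"hello", "help", "hell"}   # stand-in for a wide-coverage dictionary
    OOV_LOGPROB = math.log(0.01)          # writing outside the dictionary is allowed,
                                          # just assigned a low probability

    def score_word(char_logprobs, word, lexicon=LEXICON):
        """Score one word hypothesis: character-level evidence from the ANN,
        plus a weak language-model bonus for in-dictionary words."""
        # char_logprobs[i] maps each candidate character to its ANN
        # log-probability at position i of this segmentation hypothesis.
        evidence = sum(char_logprobs[i][c] for i, c in enumerate(word))
        lm = 0.0 if word in lexicon else OOV_LOGPROB
        return evidence + lm

    # Two competing hypotheses for the same ink:
    char_logprobs = [
        {"h": math.log(0.6), "b": math.log(0.4)},
        {"e": math.log(0.7), "c": math.log(0.3)},
        {"l": math.log(0.8), "i": math.log(0.2)},
        {"p": math.log(0.6), "o": math.log(0.4)},
    ]
    print(score_word(char_logprobs, "help"))   # in-dictionary, strong evidence
    print(score_word(char_logprobs, "bcio"))   # out-of-dictionary: penalized, not impossible
    ```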

    The core of Apple's print recognizer is the ANN character classifier. We chose ANN technology at the outset for a number of key attributes. First, it is inherently data-driven: it learns directly from examples of the kind of data it must ultimately classify. Second, ANNs can carve up the sample space effectively, with nonlinear decision boundaries that yield excellent generalization, given sufficient training data. This results in an ability to accurately classify similar but novel patterns, and avoids certain classic, subtle data dependencies exhibited by hidden Markov models (HMMs), template matching, and other schemes, such as over-sensitivity to hooks on tails, pen skips, and the like. In addition, there is a rich literature demonstrating the applicability of ANNs to producing accurate estimates of a posteriori probabilities for each class, given the inputs.

    In some respects, our ANN classifier is quite generic, being trained with standard error backpropagation (BP). Our network's architecture takes advantage of previous work indicating that combined, multiple recognizers can be much more accurate than any single classifier. However, we combine those parallel classifiers in a unique fashion, tying them together into a single, integrated multiple-representations architecture, with the last hidden layer for each, otherwise independent, classifier connected to a final, shared output layer. We take one classifier that sees primarily stroke features (tangent slope resampled to a fixed number of points), and another classifier that sees primarily an anti-aliased image, and combine them only at the final output layer. This architecture allows standard BP to learn the best way to combine the multiple classifiers, which is both powerful and convenient.
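    The shape of that architecture is easier to see in code. The following sketch (plain NumPy, with made-up layer sizes, not Apple's) shows two otherwise independent classifiers, one fed stroke features and one fed an anti-aliased image, joined only at a single shared output layer, so backpropagation through that layer learns how to combine them:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(n_in, n_out):
        return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

    # Two independent hidden stacks (sizes are illustrative):
    W_stroke, b_stroke = layer(40, 72)    # tangent-slope stroke features
    W_image,  b_image  = layer(196, 72)   # 14x14 anti-aliased image

    # ...tied together only at one shared output layer:
    W_out, b_out = layer(72 + 72, 95)     # e.g. 95 character classes

    def classify(stroke_feats, image):
        h_stroke = np.tanh(stroke_feats @ W_stroke + b_stroke)
        h_image = np.tanh(image @ W_image + b_image)
        h = np.concatenate([h_stroke, h_image])   # last hidden layers, side by side
        logits = h @ W_out + b_out
        e = np.exp(logits - logits.max())
        return e / e.sum()                        # estimate of p(class | input)

    probs = classify(rng.normal(size=40), rng.normal(size=196))
    ```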

    Training an ANN character classifier for use in a maximum-likelihood word recognition system has different constraints than would training such a network for stand-alone character recognition. In particular, we have devised several innovative network training techniques, all of which modestly degrade the accuracy of the network as a pure character classifier, yet dramatically improve the accuracy of the word recognition system as a whole.

    The first of these techniques we refer to as NormOutErr, short for "normalized output error." Training an ANN to classify 1-of-N targets with standard BP produces a classifier that does a fine job of estimating p(class|input) for the top-choice class. However, BP's least mean-squared error solution, together with typical classification vectors (which consist of all 0s except for a single 1 corresponding to the target class), results in a classifier that does not estimate second- and third-choice probabilities well. Rather, such classifiers tend to make unambiguous single-choice classifications of patterns that are, in fact, inherently ambiguous. The result is a class of recognition errors involving a single misclassified letter (where the correct interpretation is assigned a zero or near-zero probability) that causes the search to reject the entire, correct word.

    We speculated that this effect might be due to the preponderance of 0s relative to 1s in the target vectors, as seen at any given output unit. Lacking any method for accurately reflecting target ambiguity in the training vectors, we tried partially normalizing this "pressure toward 0" relative to the "pressure toward 1." We did this by modulating the error seen at nontarget output units by a scale factor, while leaving the error at the target output unit unmodified. This generally increased the activation levels of the output units, and forced the network to allocate more of its resources to the modeling of low probability samples and classes. Most significantly, it allowed the network to model second- and third-choice probabilities, thus making the ANN classifier a better citizen in the larger recognition system. While this technique reduced top-choice character accuracy on the order of a percent, it dramatically increased word-level accuracy, resulting in approximately a 30% reduction in word-level error rate.
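    In code, NormOutErr amounts to one extra line in the output-layer error computation. A minimal sketch; the scale factor below is a guess, since the article does not give Apple's value:

    ```python
    import numpy as np

    def norm_out_err(outputs, targets, nontarget_scale=0.3):
        """Output-layer error with NormOutErr-style scaling: the error at
        nontarget units (targets of 0) is scaled down, reducing the
        'pressure toward 0' relative to the 'pressure toward 1' at the
        single target unit, whose error is left unmodified."""
        delta = targets - outputs            # standard BP output error
        scale = np.where(targets == 1.0, 1.0, nontarget_scale)
        return delta * scale

    # Example: a three-class output with an ambiguous second choice.
    outputs = np.array([0.70, 0.25, 0.05])
    targets = np.array([1.0, 0.0, 0.0])
    print(norm_out_err(outputs, targets))    # nontarget errors shrunk by 0.3
    ```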

    Another of the techniques we apply routinely in our ANN training is what we call frequency balancing. Training data from natural English words and phrases exhibit very nonuniform priors for the various character classes, and ANNs readily model these priors. However, as with NormOutErr, we find that reducing the effect of these priors on the net, in a controlled way, and thus forcing the net to allocate more of its resources to low-frequency, low-probability classes, significantly benefits the overall word recognition process. To this end, we explicitly (partially) balance the frequencies of the classes during training. We do this by probabilistically skipping and repeating patterns, based on a precomputed repetition factor. (Each presentation of a repeated pattern is "warped" uniquely, as discussed later.) This balancing of class frequencies is conceptually related to a common method for converting from ANN estimates of posterior probability p(class|input), to the value needed in an HMM or Viterbi search p(input|class), which is to divide by p(class) priors. However, our approach avoids potentially noisy estimates of low-probability classes resulting from division by small numbers, and eliminates the need for subsequent renormalization. Again, character-level accuracy suffers slightly by the application of this technique, but word-level accuracy improves significantly.
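    A sketch of that skipping-and-repeating scheme, with an interpolation exponent standing in for whatever "partial" balancing Apple actually used:

    ```python
    import random
    from collections import Counter

    def repetition_factors(labels, alpha=0.5):
        """Precompute per-class repetition factors that partially balance
        class frequencies: alpha=0 keeps the natural priors, alpha=1
        balances them fully. The interpolation is our assumption."""
        counts = Counter(labels)
        mean = sum(counts.values()) / len(counts)
        return {c: (mean / n) ** alpha for c, n in counts.items()}

    def balanced_stream(samples, factors):
        """Yield training patterns, probabilistically skipped or repeated
        according to their class's repetition factor. (In the real system,
        each repeated presentation was also warped uniquely.)"""
        for pattern, label in samples:
            f = factors[label]
            reps = int(f) + (random.random() < (f - int(f)))  # f=2.3 -> 2 or 3
            for _ in range(reps):
                yield pattern, label
    ```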

    While frequency balancing corrects for under-represented classes, it cannot account for under-represented writing styles. We use a probabilistic skipping of patterns to address this problem as well, but this time for just those patterns that the net correctly classifies in its forward/recognition pass, which results in a form of error emphasis. We define a correct-train probability for use as a biased coin to determine whether a particular pattern, having been correctly classified, will also be used for the backward/training pass. This only applies to correctly segmented, or positive patterns, and misclassified patterns are never skipped. Especially during early stages of training, we set this parameter fairly low, thus concentrating most of the training time and the net's learning capability on patterns that are more difficult to correctly classify. This is the only way we were able to get the net to learn to correctly classify unusual character variants, such as a three-stroke "5" as written by only one training writer.
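    The biased coin itself is trivial; roughly as follows, with 0.25 standing in for the article's "fairly low" early-training setting:

    ```python
    import random

    def run_backward_pass(correctly_classified, is_positive_pattern,
                          correct_train_prob=0.25):
        """Error emphasis: misclassified patterns always train, as do
        negative (missegmented) patterns; a correctly classified positive
        pattern trains only with probability correct_train_prob, so most
        of the learning budget goes to the hard cases."""
        if not correctly_classified:
            return True
        if not is_positive_pattern:
            return True
        return random.random() < correct_train_prob
    ```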

    Other special training techniques include negative training--presenting missegmented collections of strokes as training patterns, along with all-zero target vectors--and stroke warping--deliberate random variations in stroke data, consisting of small changes in skew, rotation, and x and y linear and quadratic scalings. During recognition, the ANN classifier will necessarily encounter both valid and invalid combinations of strokes, and must classify them as characters. Negative training helps by tuning the net to suppress its output activations for invalid combinations, thus reducing the likelihood that those missegmentations will find a place in the optimum search path. Stroke warping effectively extends the data set to similar, but subtly different writing styles, and enforces certain useful invariances.
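    A sketch of stroke warping on one stroke (an N x 2 array of pen points); the variation ranges are illustrative guesses, and the quadratic scalings mentioned above are omitted for brevity:

    ```python
    import numpy as np

    def warp_stroke(points, rng):
        """Apply a small random rotation, skew, and linear x/y scaling,
        producing a subtly different but still-valid writing of the same
        character for each repeated training presentation."""
        theta = rng.uniform(-0.1, 0.1)            # rotation (radians)
        skew = rng.uniform(-0.1, 0.1)             # x-skew proportional to y
        sx, sy = rng.uniform(0.9, 1.1, size=2)    # linear scalings
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        shear = np.array([[1.0, skew],
                          [0.0, 1.0]])
        scale = np.diag([sx, sy])
        return points @ (scale @ shear @ rot).T

    rng = np.random.default_rng(7)
    stroke = np.array([[0.0, 0.0], [0.2, 0.9], [0.4, 0.0]])  # a rough inverted "v"
    print(warp_stroke(stroke, rng))
    ```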

    Two practical considerations in building an ANN-based system for a hand-held device are speed and memory limitations. Especially for the ARM 610 chip that drives the Newton MessagePad 120 and 130 units, 8-bit integer operations are much faster than either longer-integer or floating-point operations, and cache coherency benefits from reduced data sizes. In addition, memory is at a premium in these devices. So, despite previous work that suggests ANN training requires roughly 16-bit weights, we were highly motivated to make 8-bit weights work. We took advantage of the fact that the ANN's forward/recognition pass is significantly less demanding, in terms of precision, than is the backward/learning pass. It turns out that 1-byte (8-bit) weights are sufficient if the weights are properly trained. We limit the dynamic range of floating-point weights during training, and then round to the desired precision after convergence. If the weight limit is enforced during high-precision training, the net's resources will adapt to compensate for the limit. Because bias weights are few in number, however, and very important, we let them use 2 bytes with essentially unlimited range. Performing our forward/recognition pass with low-precision, 1-byte weights (a 3.4 fixed-point representation, ranging from almost -8 to +8 in 1/16 increments), we find no noticeable degradation relative to floating-point, 4- or 2-byte weights using this scheme. We have also developed a net training algorithm based on 8-bit weights, by appending an additional 2 bytes, during the backward/training pass only, that accumulate low-order changes, only occasionally carrying over into the primary 8-bit range, which affects the forward/recognition pass.
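    The 3.4 fixed-point scheme is compact enough to show directly. A sketch, assuming (as described above) that training has already confined the weights to the representable range:

    ```python
    import numpy as np

    def quantize_3_4(weights):
        """Round float weights to signed 8-bit, 3.4 fixed-point: 4
        fractional bits, so each step is 1/16, covering roughly -8 to +8.
        The stored value is an int8; the effective weight is int8 / 16."""
        return np.clip(np.round(np.asarray(weights) * 16.0), -128, 127).astype(np.int8)

    def dequantize_3_4(q):
        return q.astype(np.float32) / 16.0

    w = np.array([-7.93, 0.031, 3.14159])
    print(dequantize_3_4(quantize_3_4(w)))   # e.g. -7.93 -> -7.9375
    ```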

    So, in summary, we have devised several techniques for using and training an ANN classifier that is to be embedded in a higher-level recognition system. Some, such as limited precision weights, are a direct result of physical limitations of the device. Others derive from the fact that an ANN classifier providing class probability estimates to a search engine necessarily has different constraints than does such a classifier operating alone. Despite the seemingly disparate nature of the various techniques we've described, there does seem to be a unifying theme, which is that reducing the effect of a priori biases in the data on network learning significantly improves the system's overall accuracy. Normalization of output error prevents overrepresented nontarget classes from biasing the net against underrepresented target classes. Frequency balancing prevents over-represented target classes from biasing the net against under-represented target classes. And error emphasis prevents over-represented writing styles from biasing the net against under-represented writing styles.

    One could even argue that negative training eliminates an absolute bias toward properly segmented characters, and that stroke warping reduces the bias toward those writing styles found in the training data, although these techniques provide wholly new information to the system as well. The general effect may be related to the technique of dividing out priors, as is sometimes done to convert from p(class|input) to p(input|class). In any event, it is clear that paying attention to such biases and taking steps to modulate them represent a vital component of effectively training a neural network serving as a classifier in a maximum-likelihood recognition system. It is also clear that ANN classifiers in conjunction with optimal search strategies provide a degree of accuracy and robustness that is otherwise difficult to obtain.
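    For comparison, the "dividing out priors" conversion mentioned above is the one-liner below; the balancing techniques described in this article exist largely to avoid the noisy small-prior divisions it can produce:

    ```python
    def scaled_likelihoods(posteriors, priors):
        """Convert ANN posteriors p(class|input) into scaled likelihoods
        p(input|class) for an HMM/Viterbi-style search by dividing out
        the class priors p(class). Tiny priors make this noisy."""
        return {c: p / priors[c] for c, p in posteriors.items()}
    ```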

    This work was performed in collaboration with Richard Lyon (Apple), Brandyn Webb (The Future), Bill Stafford (Apple), and Les Vogel (Angel Island Technologies). We are also indebted to many supportive and contributing colleagues at Apple and in the connectionist community. A more detailed, technical discussion of our recognition system is available through my Web page (Larry Yaeger - pen-based character recognition, http://www.atg.apple.com/personal/yaeger).

    Larry Yaeger is technical lead at Apple Computer in the development of the neural network-based hand-print recognition system used in second-generation Newton PDAs. At Digital Productions, he used a Cray X-MP supercomputer to generate the computer-graphics special effects for Hollywood films The Last Starfighter, 2010, and Labyrinth. While with Alan Kay's Vivarium Program at Apple, he designed and programmed a computer "voice" for Koko the gorilla, and created the PolyWorld artificial-life computational ecology that evolves neural architectures resulting from the mutation and recombination of genetic codes, via behavior-based, sexual reproduction of artificial organisms. Contact him at larryy@apple.com
  • by ralphc ( 157673 ) on Thursday August 01, 2002 @09:43AM (#3991576)
    In Wired, years ago...
    Q: How many Newton users does it take to change a light bulb?
    A: Foux! There to eat lemons, ore axle soup.
  • by iamiuru ( 100670 ) on Thursday August 01, 2002 @10:12AM (#3991794)
    All over the Mac rumor sites there had been (well, the conversations pop up every now and again) conversation about a new PDA. People figured, hey, if it's in Jaguar then they have to be working on a new PDA, even though Apple has stated they do not want to get back into that world (could be disinformation, though).

    Personally I think it's pretty cool to be able to hand write something (typing can be faster, but not for everyone) into any application, or draw a quick middle finger to your boss in an email (quicker than ASCII art - unless of course you have a repository of that sort of thing).

    But hey, the first uses of the handwriting recognition on OS X have been at the Apple Stores. I may be incorrect, but the pad that you sign your name on for your credit card receipt may be using it. I heard someone at the NY store talking about the fact that the little signature devices were also using the handwriting recognition software to match up against what your credit card's stripe has on it. If it's true, it's a nice real-world experiment to tweak out the software.

    • *probably* not. Pen-based POS (point-of-sale) terminals simply capture signatures, though there was talk this past spring of some recognition schemes getting the green light for development, particularly in restaurants & hotels...

      Plus, they'd never get mine, thanks to its style - and the same goes for most signatures: they are highly stylized, and the recognizers rely on you using *fairly* standard block and cursive letters.

      If that *is* an Inkwell pad on their POS Mac, it's likely just for capturing. Associating the ASCII on my credit card just once with a scrawl is pretty useless. Once trained, it might be of some use, but then you'd have to spread those trained signatures across the Apple retail system so they'd be of use in subsequent sales, and that gets unwieldy, not to mention scary...
  • by singularity ( 2031 ) <nowalmartNO@SPAMgmail.com> on Thursday August 01, 2002 @10:48AM (#3992069) Homepage Journal
    I never used a Newton, despite being a big Apple fan. I just never had the money when they were available.

    When I hear "from Newton", though, I think of older technology. The Newton may have been great, but it was out a long time ago. Just rolling a Newton technology into the newest version of OS X seems like something I would not get excited about.

    So my guess is that it is just a marketing decision.

    The other thing (I do not think this) is that there are people that are going to look and equate Newton with "market failure." Once again, the marketing types are not going to want people to think that about a new technology.

    Inkwell may be based on Newton's recognition, but marketing does have some reasons not to make that obvious.


    • Yeah, but Slashdot will never pass up the chance to perpetuate the myth that Steve Jobs is a petulant child who changed the text to spite those "Newton lovers!"

      As if he didn't point out that it was from the Newton at WWDC.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Thursday August 01, 2002 @11:25AM (#3992379)
    Comment removed based on user account deletion
    • Isn't the smallest iBook now rather SMALLER than the eMate 300? Heavier, sure, but a HELL of a lot more capable - including Inkwell when installed - all it really needs is a surface to scribble on, and a modified trackpad (i.e. much bigger) should suffice.
  • Oh grow up. (Score:3, Interesting)

    by fm6 ( 162816 ) on Thursday August 01, 2002 @12:12PM (#3992740) Homepage Journal
    Most people erroneously think the HWR in Newton OS was bad (thanks to The Simpsons!).
    No, it was thanks to the thousands of Newton early adopters (including me) who experienced this problem. Not only was the handwriting recognition itself problematic, but the features that were supposed to work around the problem were very badly designed. (The code that added new words to the recognition dictionary, for example, didn't understand that punctuation wasn't part of a word!) They did fix these problems in later versions, but by then the product had no hope of being widely accepted.
    • Re:Oh grow up. (Score:3, Interesting)

      by BitGeek ( 19506 )

      This is just wrong.

      I had an original 1.0 release of the software, and very poor handwriting.

      I had no trouble getting it to recognize my handwriting, and found it to be a delight to use.

      The product was rather widely accepted before Jobs came back and killed it. Spinning Newton Inc out was the correct idea and would have made Apple money -- they certainly were a profitable organization.

      The Newton was widely adopted and very successful. It just wasn't "mass market" like the Palm. But there was a huge industry.

      Like NeXT, it's one of those technologies that pundits pooh-poohed (ignorant of technology as they are anyway) and people assumed was a failure that wasn't profitable.

      Hell, like APPLE.... even when it was 3 times the size of Microsoft in revenues.

      • I wouldn't go quite that far, but I had 2 Newtons, a 120 and a 2100 (yep, I still use it) and I always found the HWR (both types) to be very good indeed. No joke, I can sit in the back of a cab on a bumpy London street taking notes while on the 'phone without dropping a SINGLE WORD. Personally, I'm very excited to think what should be possible when the Newton's HWR gets 2 GHz G4 chips to back it up instead of my little StrongARM.
    • Re:Oh grow up. (Score:5, Informative)

      by DJSpray ( 135538 ) <paul&thepottshouse,org> on Thursday August 01, 2002 @06:20PM (#3995258) Homepage
      All right, to try and keep this from turning into an "it sucked! no, it didn't!" debate, some background.

      I used (and programmed) every version of the Newton device. There were several generations of Newton recognition software. The first generation was actually licensed from a Russian company called Paragraph. You could set it for cursive recognition, and could also tweak the individual character shapes it was looking for.

      The algorithms were largely dictionary-based. Hence, it had a tendency either to do really well, getting the words completely right, or really badly (substituting a really wacky word choice that triggered a match). It was also possible to have the settings quite wrong for your handwriting, so that, for example, it did not know when you were breaking words (via letter spacing or pauses in your writing).

      People had very mixed results with it. If you did not use cursive, it tended to be even worse. There was some idle speculation that it probably did really well with Cyrillic cursive, due to the software's origins, but I never heard any substantiation of that rumor.

      The original Newton (through OS 1.05) had many other problems, and clearly came out of the oven a bit too early. Battery life was poor. Memory was very limited. Recognition was extremely slow. One of the most noticeable problems was that the recognizer had a tendency to lock up and stop recognizing text; you had to hit the reset button to get it moving again. Fortunately, the early Newton stored data in flash, so even doing a reset after a severe crash, you were unlikely to lose any data. (You pretty much had to forcibly wipe the flash to do that.)

      Version 2.0 of the Newton software, dubbed "Newton Intelligence", came about initially with the MessagePad 130 or a ROM update to the 120. Developers were able to get the ROM from Apple and do the replacement themselves. 2.0 featured a new recognition strategy: keep supporting the cursive recognizer, and also offer a new character-by-character print recognizer. This one worked much better for me, and a lot of other users thought so too. It would also work for cursive. This is the recognizer that Apple has presumably ported. Allegedly it was based on an ATG project. One of the things that made it better was that its algorithms were more character-based than dictionary-based; it tended not to pick completely incorrect words. Instead, you'd see what you wrote with perhaps one letter wrong. You could use gestures similar to proofreading marks to edit that one letter, or even over-write the single wrong letter.

      The Apple employees who described it at the Newton developer conferences spoke of it this way: when it made a mistake, instead of saying "HUH???", the user would say "huh."

      When I first used a Palm, I was very disappointed that they were not able to use a recognition engine like the Newton character recognizer. It really did work much better. Yes, I learned Graffiti, but I never liked it, and to this day I don't use a Palm. I would use the Newton character recognizer on a portable Palm-sized device quite happily.

      Apple doesn't get much credit for its innovations. Remember that the MessagePad 2100 could do all this, although somewhat awkwardly and perhaps just barely:

      - run Pocket Quicken or other checkbook apps
      - do shape recognition and editing
      - run a spreadsheet
      - run a graphing calculator/solver
      - store text as raw "ink" (compressed vector graphics) and recognize it at a later time
      - provide 2 PCMCIA memory card slots
      - do infrared data exchange
      - drive a modem card
      - run a mini- web browser
      - do desktop sync
      - record and playback voice memos
      - do text-to-speech (in a pretty primitive Macintalk-1.0 way).
      - support a keyboard
      - run applications written in a very cool dynamic, interpreted, byte-coded language optimized for low memory footprint, using ideas from languages like Scheme and Self, but with a simple Pascal-like syntax. (This was pre-Java.)

      Of course, it was also way too expensive, and Apple was not able to get the price down in time to gain market share. Flash memory was expensive. Static memory was expensive. The screen was expensive. It was expensive to assemble. If you ever took one apart, it was clear that it required a great deal of skilled labor to assemble. The screen itself was an elaborate sandwich of the digitizer, the LCD screen, and a backlight. There were wires running all over the innards. Compare that to a Palm device, which was designed for minimal chip count and minimal cost to manufacture. It was not just Apple's desire to maintain profit margins at work, although that no doubt played a part too.

      I personally was very fond of using the Newton to keep notes and balance my checkbook. The character-based recognizer and StrongARM chip made it usable.

      Nevertheless, when we developed software for naive new-to-the-device end users, we did everything with popups and radio buttons. Trying to get a novice user to successfully use any computer handwriting recognizer immediately is not yet feasible (and may not be for some time).

      I personally am rather annoyed to see Steve's childish behavior in ignoring and marginalizing all the R&D and many smart people's hard work that went into the Newton. Sure, it may have been a failure as a product in the long term, but it caught people's imagination and was definitely a technological success in many ways, and its technologies have not been equalled in another product.

      The engineers who designed it don't deserve more snide behavior from Apple's self-appointed "savior." They took enough flak at the time for trying to create something so far ahead of the curve, and getting yanked around by having the Newton group spun in, spun out, and unceremoniously killed.

      The marketers and project managers leading the Newton effort certainly deserve a healthy share of the blame for its failure; in a way, they were letting the engineers design the product and stuff it with features, which gave it a high geek value but not much chance of mass-market success and not much cost-effectiveness, while nickel-and-diming them on things like memory. It's unclear in retrospect what outcome they really could have expected under those circumstances.

      Paul R. Potts
      • I agree that Newton's HWR got better as time went on. I had the original model, and found the recognition to be poor enough that I replaced it with Graffiti when that option became available.

        In the 120, recognition was improved somewhat, but it seemed to really get good with the MP2000/2100. I used printed, rather than cursive, characters and found that it did a very good job.

        The data sharing capability between applications on the Newton has also yet to be equalled by any other device I've seen.

      • Re:Oh grow up. (Score:3, Informative)

        by Mr. Protocol ( 73424 )
        I used (and programmed) every version of Newton device. There were several generations of Newton recognition software. The first generation was actually licensed from a Russian company called Paragraph.
        I had a chance to talk to the folks at ParaGraph International at a mobile computing conference once. It was a very enlightening conversation. They all used to work at the Soviet Academy of Sciences, and had decided that rather than slowly starve in the post-Soviet era, they'd rather form a Russian equivalent of Bell Labs. Now, it turns out one thing they were really good at was curve-fitting. They could use higher-order polynomials to compress and characterize curves of arbitrary shape. As a demonstration, they took a Picasso pencil sketch (in color), compressed it down to 17K, then re-expanded it into something indistinguishable from the original.

        These folks told me that when Apple first contracted with them, they were held at such arm's length that they didn't even know what kind of device they were writing a recognizer for. They never even saw a Newton until it hit the market. Hence, they had no opportunity to tune the recognizer. Those who've used Newtons know that things were difficult at best until Newton OS 2.0 came out for the 120. After that, it got much better (and the Rosetta printed recognizer really helped). That was the release that used the 'tuned' cursive recognizer, and with further tweaking in Newton OS 2.1, it pretty much rocks. No more Egg Freckles.

        Inkwell, of course, is based on Rosetta, not the ParaGraph recognizer, but the latter is available as a separate package for other PDAs.
      • It should also be noted that the Soup data structures (see a brief description [cbbrowne.com] of how they work, which would be immediately familiar to Lisp/Scheme hackers) have still not been recognized, embraced and exploited by the PDA community to date. This one innovation would make it far easier for Palm developers to leverage off of each others' work instead of the ludicrous workarounds we have today to try to store related data in other Palm applications' databases.
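        For those who never saw one: a soup is essentially a queryable bag of schema-less frames. A toy sketch of the flavor (a hypothetical Python API, nothing like actual NewtonScript syntax):

        ```python
        class Soup:
            """A schema-less store: every entry is just a frame (dict) of
            arbitrary slots, and any application can query by slot without
            knowing which application created the entry."""
            def __init__(self, name):
                self.name = name
                self.entries = []

            def add(self, **frame):
                self.entries.append(frame)
                return frame

            def query(self, **slots):
                return [e for e in self.entries
                        if all(e.get(k) == v for k, v in slots.items())]

        names = Soup("Names")
        names.add(first="Ada", last="Lovelace", company="Analytical Engines")
        names.add(first="Alan", last="Kay", project="Vivarium")  # different slots: fine
        print(names.query(last="Kay"))
        ```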

    • So you're saying that the early adopters set their hopes too high and then kept telling people it sucked even when it was very good? OK, I'll buy that, I'll just blame you instead.
      • by fm6 ( 162816 )
        Set our hopes too high? Because we expected basic input tasks to work reliably?

        Pudge, if you're going to turn into a flame warrior, then you should resign as an editor. The two roles are mutually exclusive.

    • OK, I never owned one... but at every trade show where Apple was exhibiting them I bellied up to the booth and made a very serious effort to see whether the Newton would work for me.

      It didn't come close.

      It wasn't an issue of missing a word here or there, it basically missed more of them than it got. I decided it was going to be useless to me and never got one.

      Your mileage may, of course, vary.

      At WWDC 1996, I noticed that lots of attendees had Newtons--and that almost without exception they were using them with add-on keyboards.

      In contrast, I have very little trouble with the Graffiti system on the Palm. Slow, but perfectly usable.
      • I think you've pointed out a big overlooked issue with handwriting recognition. People are very different, and I suspect HW engines make assumptions that only work with some people. It's not a case of good handwriting versus bad. It's more to do with the poorly-understood differences between the way different people are wired, and use their hands. What neuropsychologists call the "neuromuscular melody".

        Though two things should be mentioned: early-model Newtons (which is apparently what you encountered) did a better job of recognizing an individual's handwriting if you took the time to "train" them to recognize your individual quirks. And later models did a pretty good job even without training. If they'd waited to perfect the handwriting technology, the machine might have done better. Then again, there were other issues -- the ill-considered form factor and the gawdawful desktop synchronization being the two biggest.

        Another individuality issue. Graffiti is good, but I never did come to terms with it -- I just don't think that way. Finally switched to Fitaly Stamp, which turns the Palm entry area into a keyboard with a proprietary layout. Not for everybody -- some might prefer qwerty or dvorak (though I think Fitaly's layout is very logical for its intended purpose). Others probably get along fine with Graffiti. And still others just don't get the whole PDA thing, and are honestly better off with a Daytimer. It's a matter of what works for you.

  • Doonesbury was complaining about HWR in Newtons long before The Simpsons. Blame the lag time of TV.

  • Class notes (Score:3, Interesting)

    by PD ( 9577 ) <slashdotlinux@pdrap.org> on Thursday August 01, 2002 @12:30PM (#3992877) Homepage Journal
    I once took notes from a fast talking History professor for an hour and a half straight on my Newton 130, without a single error in reading my handwriting.
    • I have taken notes from fast-talking History professors for an hour to an hour and a half straight on my lined notebook paper. Later that day, I had to concentrate to determine what _I_ had written. As for someone _else_ looking at my notes... not gonna happen.

      So, with me not being able to read my handwriting sometimes, how's a computer?
      • Re:Class notes (Score:2, Informative)

        by PD ( 9577 )
        The computer is trained to recognise your writing, over time. You might have crappy writing, but if you're consistent about it the Newton can read it.
  • by g4dget ( 579145 ) on Thursday August 01, 2002 @01:42PM (#3993428)
    Apple did not single-handedly invent handwriting recognition (the techniques they used are very similar to those used by speech recognition and other handwriting recognition research), but they had the vision and foresight to be the first to try and build it into actual devices. You can find Larry's papers here [beanblossom.in.us].

    The sad thing is that, today, Apple isn't doing much of that sort of research and development anymore. As far as I can tell, Apple's ATG (Advanced Technology Group) doesn't exist anymore. Most of the people who used to do this kind of research have moved on to other jobs. Microsoft Research is much larger and much more visible in the scientific community than whatever remnants of research may remain at Apple. But Microsoft still produces lousy products despite the large amounts of money they invest in research.

    I think in the long run, Apple needs to start investing heavily in research again or they'll be in trouble. And Microsoft needs to figure out how to take research results and put them into their software more successfully; unlike, say, IBM, Microsoft did not start out as an innovation-driven company, and probably lacks the mechanisms for moving research results into products.

  • Can Inkwell do Unicode character sets? I hope so, because I want to be able to enter equations via graphics tablet... Just imagine Inkwell hooked up to something like Mathematica :)

    Of course the first thing I'm going to do when I get to play with Inkwell is run vi.

  • During the video unveiling the Xserve, just before the Q&A section I believe, Steve Jobs started talking about some of the upcoming features in Jaguar. In describing Inkwell, he said it used handwriting technology from "you know what", generating chuckles from the audience.

    Insert conspiracies here about Jobs not liking the Newton because it was invented after he left Apple.

    • I read once that Jobs bought a top of the line Newton, played with it for a bit, and threw it in the trash!

    • Newton was the baby of the guy (John Sculley) who pushed for Steve's removal. It's funny how, as Apple users, we often talk of how we stick together, but when it comes to Newton, things are different.

      Back up several years. Chrysler was unveiling its "new" concept called the minivan. At a press conference with every news agency and auto personality there, Lee Iacocca himself made the presentation. He went to open the side door, and it wouldn't budge. (Must be locked.) He leaned inside, fumbled with the lock. It still wouldn't open. Chuckles were beginning to rise in the crowd. Finally, after several minutes of raucous laughter, they got the door open. Knowing Lee, it was obvious he was ready to use a crowbar, and I'm sure someone got iced over the whole incident. Well, we all know what a failure the minivan turned out to be, right? Anyone getting a clue?

      The Newton is derided by the uninformed, but let's look at the MacOS, and Windows for that matter. How long have they been around? Neither has made such amusing mistakes in handwriting translation as Newton did, because they never had the capability to begin with. Still, do they not crash? Do they not have faults that have been ongoing for more years than Newton was even in development? Did anyone have so much fun blasting MacOS versions 1 through 7?

      Those who have had so much fun picking on Newton have become empty, jaded computer geeks. You've forgotten the near-magical wonder of it all. Instead of being amazed by it, you spew vindictive remarks at something that was an astounding step in a new direction. To now hide this Inkwell lineage is only to cower to the noisy few (90% of the market? plus some Mac users) who have forgotten what it's all about... assuming they ever knew in the first place.

      Newton wasn't perfect, but it made tremendous improvement in a relatively short time, all the while existing in a company whose "leadership" basically fiddled while Rome burned. Apollo 1 burned up; let's not go to the moon. The minivan door jammed on national TV, so let's go hide, sweep it under the rug - bad idea anyway. The first two versions of this new HWR, while getting better... much better in the 2K-series Newton, still made mistakes, occasionally very funny ones, but hey, we need a perfect product now! No training-wheel time for you, mister. People. Get a clue.
  • The original "Based on the Newton's 'Print Recognizer'" text is still on the Inkwell page of Apple's Asia site.

    It will probably change soon.

    http://www.asia.apple.com/macosx/10.2/inkwell.html [apple.com]

    However it looks like the Apple UK site hasn't been updated since MacWorld, so maybe not.

  • Steve must really hate the Newton...

    Steve is disassociating their handwriting software from a system that flopped LONG AGO. Most people don't know what a Newton was. Those that do, know it flopped. Never mind the reasons or how great it was - it flopped. End of story.

    The only folks that care that it was based on that tech are a few (very few) Newton fans. Face it, as a marketing bullet, "Newton tech" is at best salt shot.
    • Steve is disassociating their handwriting software from a system that flopped LONG AGO. Most people don't know what a newton was. Those that do, know it flopped. Never mind the reasons or how great it was - it flopped. End of story.


      The Newton group had just turned profitable before Apple killed it. How did it flop?

      blakspot
      • It did not achieve marketshare.
        • It had just become profitable -- had it had more time, it would no doubt have gained more marketshare. It floundered early on due to poor HWR and the fact that people didn't know what to make of a PDA. Just like the Amiga, ahead of its time...

          "Multimedia? You mean it's a game machine. You can't seriously do business with a game machine!"

          *rolls-eyes*

          blakspot
  • I'm still using my eMate as a sort of nice phone number and address organiser today. When entering a new address I mainly use handwriting, and it's still recognised perfectly. It has the 2.0 version of Newton OS, which has much better handwriting algorithms than previous versions. One thing I like very much about the way handwriting recognition works is "gestures". Using special gestures one can change spacing between words, insert spacing, erase things, and select words. I think this will also be present in Ink.
  • Look, it's simple marketing logic. The Newton was a failure in terms of a product line, although it was an extremely cool gadget with amazing technology. Apple doesn't want to associate a new product which they hope will succeed with an old product that failed.

    The last thing that people need to misguidedly think, is that Apple is short on ideas and is having to scrounge through past failures to find new technology ideas.

    I think this is a wise decision on their part to give this technology a fresh image, separate from the ridicule (well deserved, at the time) that the early-model Newtons got (i.e. The Simpsons with the MessagePad 100, 110, etc.).

    The fact is, the Newton 2x00 handwriting recognition of 3 years ago is better than anything else on the market today, and I'm sure with some modernization, it'll be positively excellent.
