Thursday, July 23, 2015

Adding Stock Quotes to a BriteBlox Display

Background


People interested in the stock market might be inclined to have a cool stock ticker in their room or office.  While it's not quite as useful as a whole bunch of monitors displaying entire panels of quotes and graphs, it does provide at least a mesmerizing distraction, not to mention your room now looks at least a little bit more like a real trading floor in a big office.  This was the original use case for BriteBlox, the modular LED marquee designed by #DoesItPew and myself.  Not only is the marquee itself modular (i.e. you can configure it in various lengths, heights, and shapes), but ideally the control software is modular, leading to a design where any coder can develop plugins to drive the board from any data source they can convert into animated pixels.

Unfortunately, we haven't gotten the software quite to the point where it is incredibly easy for third-party developers to write plugins; right now, they would actually have to integrate the feature into the program with an understanding of what needs to be called where in the UI thread and how it ties in with the display thread.  But, as we hope to have shipping for our Kickstarter backers wound down by the end of this week (finally!), there should be more time to flesh out this modular software design.

That being said, there was another challenge: even though this marquee was developed with stock quotes in mind, I still had to find a legitimate source of quotes that's free and not on a delay.  For those who haven't done this before and are searching Google for the first time, there are many money-hungry demons thickly spamming the search results pages with false promises of a product that fits this purpose.  And despite having several brokerage accounts with various institutions, it's hard to find one that actually provides an API for stock quotes unless you meet the $25,000 minimum equity usually required for day trading.  You might get lucky and find one that interfaces with DDE for ancient versions of Microsoft Office or Visual Basic for Applications, but it's been a very long time since I've touched either of those, and I don't want a service that requires the user to install a whole lot of extra dependencies.  The most general approach seems to be reading an RSS feed.


The Best Solution For General Cases


Really Simple Syndication (RSS) is useful for quickly scanning sites for updates.  Most of the time, it is provided for news sites or sites whose content changes frequently.  Of course, stock quotes also change frequently, and since RSS offers "pull-style" updates (nothing changes until you refresh the feed), it works well with our protocol because there's no need to manage which symbol appears where or what to do with an updated price.  On days when one particular stock is trading at a volume an order of magnitude above the others, a traditional ticker can be dominated by quotes from that stock.  Our mechanism won't do that, because every ticker symbol is represented in each update, and updates occur each time the marquee finishes scrolling all of its messages.

Python can easily parse RSS feeds by means of the feedparser module, which you can install with pip.  Once you download the feed, you parse its XML with xml.etree.ElementTree.  This is all pretty easy, but if you're using the NASDAQ feed in particular, you'll notice the quotes are embedded in ugly, unwieldy HTML.  It is difficult to parse because there are no unique identifiers indicating what information is contained in which table cell, so you have to do a bit of exploration ahead of time to see which cells contain what you want.  Here is how I'm currently handling the parsing, from end to end:

# This excerpt runs inside the RSS-polling thread's main loop (hence the "continue");
# feedparser and xml.etree.ElementTree (imported as ET) are pulled in at the top of
# the module, setColor, endColor, yellow, green, and red are color/format constants
# defined elsewhere, and globals refers to the project's globals module.
try:
    d = feedparser.parse(self.feedURL)
except:
    console.cwrite("There was an error in fetching the requested RSS document.")
    self.active = False
    continue
info = []
feed = "<body>%s</body>" % d.entries[0].summary_detail.value.replace("&nbsp;", "")
tree = ET.ElementTree(ET.fromstring(feed))
root = tree.getroot()
# Find the last updated time
last = root.find(".//table[@width='180']/tr/td")
info.append("%s%s  %s" % ((setColor % yellow), last.text.strip(), endColor))
# Find all the quotes
counter = 0
for elem in root.findall(".//table[@width='200']"):
    for elem2 in elem.findall(".//td"):
        for text in elem2.itertext():
            idx = counter % 13
            if idx == 0:  # Ticker symbol
                info.append("%s%s " % ((setColor % yellow), text.strip()))
            if idx == 3:  # Last trade
                info.append("%s %s" % (text.strip(), endColor))
            if idx == 5:  # Change sign
                sign = text.strip()
                info.append("%s%s" % ((setColor % (green if sign == "+" else red)), sign))
            if idx == 6:  # Change amount
                info.append("%s %s" % (text.strip(), endColor))
            counter += 1
# We're done parsing, so join everything together
newMessage = globals.html % ''.join(info)
# FIXME: For now, this will be Message #1
globals.richMsgs[0] = newMessage


Now, the next challenge was to come up with a means of integrating stock quotes with the usual message display thread.  Twitter is a unique case: since its API updates my app via push notifications, it can tell my app to save the incoming message into the next available message slot in the queue, and then, when it's time for the serial output thread to refresh what goes out onto the matrix, any messages currently in the queue get shown.  Ideally, it'd be nice to find a stock API that behaved in a similar manner, even though it'd expose us to possibly showing multiple quotes for the same stock in one run through the message queue -- there are ways we could work around this if needed.  However, since this is a pull-based source, I needed a way for the serial output thread to signal pull-based threads to refresh their data.

There were, in fact, two approaches I debated while trying to develop this feature: 

  • Create an array of flags that the serial update thread raises before each text update; then, each class instance that registered a flag in this array updates its message in the appropriate slot
  • Tie a reference to a class instance or flag into the RawTextItem objects (derived from Qt's QListWidgetItem) that are initialized whenever you make a new message in Raw Text mode; this reference would be empty for raw text and push-based notifications but populated for text requiring pull-based updates, and it would require the serial output thread to iterate over these items, which are typically stored in the UI class instance


Ultimately, I settled on the first design.  A plugin developer would be required to know where to register the flag in either case, and I thought it'd be better to make that an array defined in the LEDgoesGlobals module rather than requiring people to pull in the UI class instance just to have access to it.  This way, they also don't have to add extra data to something that gets displayed on the UI thread.  As you can imagine, my biggest pain points were simply refactoring & debugging all the little changes, made mostly in the data structures used to pass the bits from the computer onto the marquee.
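To make the flag-array idea concrete, here is a minimal, self-contained sketch of how a pull-based plugin might hook into it.  The names (pull_update_flags, rich_msgs, StockQuotePlugin) are placeholders for illustration only, not necessarily what LEDgoesGlobals and the real plugin code use:

import threading
import time

# Stand-ins for what would live in LEDgoesGlobals (names are hypothetical)
pull_update_flags = []        # one flag per pull-based plugin
rich_msgs = ["", ""]          # the message queue the serial output thread reads from

class StockQuotePlugin(threading.Thread):
    """A pull-based plugin: it only refetches its data when its flag has been raised."""
    def __init__(self, slot):
        threading.Thread.__init__(self)
        self.daemon = True
        self.slot = slot
        self.flag = {"refresh": True}
        pull_update_flags.append(self.flag)

    def run(self):
        while True:
            if self.flag["refresh"]:
                rich_msgs[self.slot] = self.fetch_quotes()
                self.flag["refresh"] = False
            time.sleep(0.1)

    def fetch_quotes(self):
        # Real code would fetch and parse the RSS feed here, as shown earlier
        return "DUMMY 123.45 +0.67"

def serial_output_refresh():
    """Called each time the marquee finishes scrolling all of its messages."""
    for flag in pull_update_flags:
        flag["refresh"] = True    # signal every pull-based plugin to refresh its slot
    return list(rich_msgs)        # then push the current queue out to the marquee

if __name__ == "__main__":
    StockQuotePlugin(slot=0).start()
    time.sleep(0.5)
    print(serial_output_refresh())

The point of the dictionary-as-flag is simply that both threads share a mutable object; the real code just needs some agreed-upon place in LEDgoesGlobals where plugins register it.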

In the process of writing this big update to support pull updates between refreshes of the matrix, I also cleaned up code in the serial update thread that was iterating through the XML tree twice for no good reason other than to put the letter(s) into the appropriate color data structure (i.e. red or green).  I also started to make this modular by defining colors in LEDgoesGlobals, but there are still many parts of the code that treat colors individually by name rather than agnostically (by simply sending a particular color data structure to a particular set of chip addresses).

As with most things, there is still some work left on this code before it's "perfect," but it's available on our GitHub right now if you are capable of running the BriteBlox PC Interface from source and would like to check it out.

Thursday, July 16, 2015

I Finally Found an Application For My CUDA Cores!

During graduate school, I was exposed to the power of CUDA cores through my parallel computing class.  Back then, there was a relatively small number of such cores on the video card inside their shared server, something like 40 if I remember correctly.  With my NVIDIA GeForce GTX 650 Ti video card, however, I now have 768 CUDA cores at my disposal -- almost 20 times as many as in grad class 4 years ago!

Not being much of a mathematician at heart, and generally spending time on logic problems, application testing, or new HTML5 & browser paradigms rather than crunching big data, I was never really inspired to do much with these cores.  This all changed while watching the Google I/O 2015 keynote address, where they showed off the capability to draw (as best you can) an emoji, and Google's engine will try to recognize your scrawl and offer you up several professionally-drawn emojis to represent whatever it is you're trying to express.  With recent changes in my life that have augmented my ability to "Go Get 'Em" and increased the likelihood that my ideas will actually reach customers, I immediately began scheming to learn how they went about doing this.  Obviously image analysis was involved, but what algorithms did they use?  Thinking back to my Digital Image Analysis class, I began researching how applicable Hough transforms would be to my problem.  I would need to teach the computer what certain symbols looked like in that particular mathematical space, which would probably take me a while since it's not really one of my strong points.  Another discouraging bit of trivia is that Hough transforms can be difficult to apply to complex shapes because there ends up being very little margin for error.  Well, scratch that; back to the drawing board.

Then, thinking back to Machine Learning class, one algorithm in particular seemed adaptable to all sorts of problems, and is even designed around the same (or very similar) principles as human thought.  This particular learning algorithm has received quite a bit of buzz lately, with projects such as MarI/O and Google's "Inceptionism" experiments: neural networks.  With neural networks, you ultimately end up with (through some sort of black magic that occurs through repetitive training exercises) a series of very simple algebraic equations that will help you arrive at an answer given one or more inputs (it usually helps to have at least two inputs to make things at all interesting).  Through stacked layers of various sizes, each composed of simple units called "perceptrons" (which fulfill a very similar role to neurons), the neural network will begin to perceive features in a set of data in much the same way a human will analyze a visual scene and pick out all the items they can see.  There are many variables involved in coming up with a good neural network for a specific problem; for instance, the number of iterations you run training on the network, and the functions your perceptrons use when weighing inputs to make the final decision.  The neural network can also, unfortunately, be easily biased by the training data it sees during formation, so sometimes it can perceive things that aren't really there.
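To make the "very simple algebraic equations" concrete, here is a minimal sketch of a single perceptron with a sigmoid activation.  The weights and bias are made-up numbers purely for illustration; a real network stacks whole layers of these and learns those values during training:

import math

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed into (0, 1) by a sigmoid activation
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs, as suggested above; training is what settles on the weights and bias
print(perceptron([0.8, 0.2], [1.5, -2.0], 0.1))   # roughly 0.71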

Given a set of data that could end up being very large, it became desirable to find a way to train the neural network using some sort of parallel framework, if possible.  Luckily, people have already solved this problem: NVIDIA has devised a library of primitives for neural networks (including Deep Neural Networks and Convolutional Neural Networks) called cuDNN.  Computer scientists at UC Berkeley have developed a highly-optimized DNN framework called Caffe; it happens to support cuDNN, which you enable when you build it, and this takes its existing capabilities to a whole new, much faster level.


Getting My Caffe to Brew


Important note: This is all cutting-edge information, and is subject to change over time.  Some of the sources I used to put this article together are already slightly out of date, and so I expect this post will eventually go out of date too.  You've been warned!

Unfortunately, Caffe with cuDNN requires quite a few dependencies; these are all called out on this particular introductory post.  I chose to install some directly from source (by downloading the source or cloning from GitHub), and others were installed through Synaptic Package Manager on Ubuntu.  For this particular project, I installed the following binaries from the following sources:


Expected Package      | Installed                                         | Method
CUDA                  | CUDA 7.0.28                                       | Synaptic
BLAS                  | OpenBLAS 0.2.14                                   | Direct download (see Note 2)
Boost                 | libboost-dev 1.54.0.1ubuntu1                      | Synaptic
OpenCV                | OpenCV 3.0.0                                      | Direct download
protobuf (see Note 3) | protobuf 3.0.0-alpha3, later replaced with 2.6.1  | Direct download
glog                  | glog 0.3.3                                        | Direct download
gflags (see Note 1)   | gflags 2.1.2                                      | Direct download
hdf5                  | libhdf5-dev 1.8.11-5ubuntu7                       | Synaptic
leveldb               | libleveldb1 1.15.0-2                              | Synaptic
snappy                | libsnappy1 1.1.0-1ubuntu1                         | Synaptic
lmdb                  | liblmdb0, liblmdb-dev 0.9.10-1                    | Synaptic
And finally...
Caffe                 | Merge 805a995 7d3a8e9, 7/3/15                     | Git clone

Note 1: When making gflags, take a moment to go into the Advanced options of ccmake and specify the CMAKE_CXX_FLAGS variable (how, you ask? see the ccmake explanation below).  You need to set this variable to contain the compilation flag -fPIC; otherwise, later on, when you try to build Caffe, it will complain that the files you built for gflags aren't suitable to be used as shared objects by Caffe.

Note 2: For reasons unknown, my first attempt to install OpenBLAS from a Git clone didn't pan out; I ended up downloading this version directly and installing it successfully.

Note 3: At the time of this writing, you will run into trouble if you try to use the Python wrapper for exploring Caffe models if you build Caffe with protobuf 3.0.  Until this is fixed, use protobuf 2.6.1.

If you've never used cmake before, it's not very difficult at all.  At its heart, cmake facilitates making build instructions for multiple platforms in one convenient place, so that users of Windows, Linux, and Mac only need to tell it about certain paths to libraries and include files that don't already exist on their PATH or in some environment variable.  To set up your Makefile with cmake, the easiest thing to do is to go into the directory one level above cmake (e.g. caffe/, which contains caffe/cmake) and write ccmake . on the command line (note the two C's and the dot).  If you're into isolating new work, you may wish to create a build directory inside the project root directory, then run ccmake .. so that it's easy to trash all temporary files.

However, setting up the configuration for Caffe itself was not so easy for me.  After installing all the dependencies, the system just flat out refused to believe I wanted to use OpenBLAS rather than Atlas, so I ended up actually having to delete several lines of the Dependencies.cmake file -- specifically, the parts that specified which environment variables to read from if the user had specified Atlas or MKL -- as indicated by the "stack trace" being provided by ccmake.  Ultimately, not too difficult an adjustment to make; I just never have too much fun adjusting Makefiles by hand, so if it can be done through the configuration tool, I'd much prefer that.


Building a Useful Data Model


Once you have done all these steps to make Caffe with cuDNN, a great real-world example to run through is the "mnist" example, which churns through tens of thousands of samples of handwritten numeric digits from the National Institute of Standards & Technology that were collected back in the early '90s (i.e. the MNIST database).  These scans are very low-resolution by today's standards, but are still often used as a benchmark for the performance of neural networks on handwriting samples (just as the picture of Lena Soderberg from a 1972 Playboy centerfold is still used as a benchmark for image processing algorithms, except with far fewer sexist undertones :-P).  Nevertheless, my machine took just under 4 minutes and 17 seconds to crank through a 10,000-iteration training cycle for a neural network that will classify image input as a digit.  The demo (linked to above) was very simple to run, as all of the work to create the neural network structure and the mechanism of the perceptrons was done for me in advance; all I had to do was kick off the script that iteratively runs the training so it drills down on the salient features distinguishing each digit from the others.  The only hangup was that some of the scripts expected files to be located in the ./build/ directory, but my particular installation skipped ./build/ and went directly to the desired paths.

Putting the Model To Use: Classifying Hand-Drawn Numbers


After doing a bit of reading on how to extract the features from the neural network, I decided it'd be easiest to stick to the Python wrapper until I get some more experience with which operations exactly get run where, which is highly dependent on the way your deployment prototxt file is set up. One thing that would have been nice to know: the link seen in many places in the Caffe documentation that supposedly describes how to use the Python module is wrong; they omitted a "00-", so it should really be http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb.  On my environment, some Python dependencies also needed to be installed before the Python wrapper would run properly.  Here's what I had to do:

  1. for req in $(cat requirements.txt); do sudo pip install $req; done -- Installs many of the Python modules required, but leaves a little bit to be desired (which is accounted for in the next steps)
  2. Install python-scipy and python-skimage using Synaptic
  3. Uninstall protobuf-3.0.0-alpha3, and install an older version (in accordance with Caffe issue #2092 on GitHub)... would have been nice to know this ahead of time.  (Don't forget to run sudo ldconfig so you can verify the installation by running protoc --version).
  4. Rebuild caffe so it knows where to find my "new (old)" version of protobuf
Once my dependency issues were sorted, I managed to find the deployment prototxt file for this particular neural net in caffe/examples/mnist/lenet.prototxt.  Now, I can run the model simply by issuing the following Terminal command:

caffe/python$ python classify.py --model_def=../examples/mnist/lenet.prototxt --pretrained_model=../examples/mnist/lenet_iter_10000.caffemodel --gpu --center_only --channel_swap='0' --images_dim='28,28' --mean_file='' ../examples/images/inverted2.jpg ../lenet-output.txt


lenet_iter_10000.caffemodel is the trained model from the training exercise performed earlier from the Caffe instructions.  inverted2.jpg is literally a 28x28 image of a hand-drawn number 2, and lenet-output.txt.npy is where I expect to see the classification as proposed by the model (it tacks on .npy).  The channel swap argument relates to how OpenCV handles RGB images (really as BGR), so by default, the value is "2,1,0".  By carefully scrutinizing this command, you may notice two things:

  • The input image should be inverted -- i.e. white number on black background.
  • The input image should only have one channel.

Thus, before running my model, I need to make sure the image I'm classifying is compliant with the format required for this classifier.  For further confirmation, take a look at the top of lenet.prototxt:

input_dim: 64   # number of pictures to send to the GPU at a time -- increase this to really take advantage of your GPU if you have tons of pictures...
input_dim: 1   # number of channels in your image
input_dim: 28   # size of the image along a dimension
input_dim: 28   # size of the image along another dimension

You may be tempted to change the second input_dim to 3 in order to use images saved in the standard 3-channel RGB format, or even 4-channel RGBA.  However, since this neural network was trained on grayscale images, it will give you a Check failed: ShapeEquals(proto) shape mismatch (reshape not set) error if you do this.  Thus, it's important the image is of single-channel format and inverted, as mentioned above.
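As an aside, here is one way you might produce such an input image.  This is just a hypothetical sketch using Pillow (the Caffe instructions don't call for it), and the filenames are made up:

from PIL import Image, ImageOps

img = Image.open("two.png").convert("L")   # collapse to a single grayscale channel
img = ImageOps.invert(img)                 # white digit on a black background
img = img.resize((28, 28))                 # match the 28x28 input_dim above
img.save("inverted2.jpg")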

Finally, so that classify.py properly handles the single-channel image, you need to make some amendments to it.  Take a look at this comment on the Caffe GitHub page for an explanation of exactly what you need to do; in short, change the two calls of the form caffe.io.load_image(fname) to caffe.io.load_image(fname, False), and then use the channel_swap argument as specified in the command above.  However, you may just wish to hold out for, incorporate, or check out the Git branch that contains Caffe Pull Request #2359, as it contains some code that cleans up classify.py so you can simply use one convenient command-line flag, --force_grayscale, instead of having to specify --mean_file and --channel_swap and rewrite code to handle single-channel images.  It'll also allow you to conveniently print out labels along with the probability of the image belonging to each category.
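Alternatively, if you'd rather skip classify.py altogether, roughly the same classification can be done with a few lines against Caffe's Python wrapper directly.  Treat this as a rough sketch: it assumes you're running from the caffe/ root with the files from the MNIST example, and that the deploy file ends in a Softmax "prob" layer (as discussed in the troubleshooting notes below); it mirrors the single-channel handling described above:

import caffe

caffe.set_mode_gpu()

net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# Load the hand-drawn digit as grayscale (color=False yields a single channel),
# matching the inverted, white-on-black format the network was trained on
img = caffe.io.load_image('examples/images/inverted2.jpg', color=False)

# Reshape the input blob for one 1-channel 28x28 image and copy the pixels in
net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[...] = img.transpose(2, 0, 1)

out = net.forward()
print('Predicted digit:', out['prob'].argmax())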

Now that you've been exposed to the deployment prototxt file and have an idea of what layers are present in the system, you can start extracting them by using this straightforward guide, or possibly this other guide if you're interested in making HDF5 and Mocha models.


Troubleshooting


Before discovering lenet.prototxt, I tried to make my own deploy.prototxt.  First, I utilized lenet_train_test.prototxt as my baseline.
  • If you leave the file as it is but do not initialize the database properly, you will see Check failed: mdb_status == 0
  • I deleted the "Data" layers that are included for phase TRAIN and phase TEST.  I am not using LMDB as my picture source; I'm using an actual JPEG, so I needed to follow something along the lines of this file format:
    name: "LeNet"   # this line stays unchanged
    input: "data"   # specify your "layer" name
    input_dim: 1   # number of pictures to send to the GPU at a time -- increase this to really take advantage of your GPU if you have tons of pictures...
    input_dim: 1   # number of channels in your image
    input_dim: 28   # size of the image along a dimension
    input_dim: 28   # size of the image along another dimension
    layer: {
      name: "conv1"   # continue with this layer, make sure to delete other data layers

      ...
  • Delete the "accuracy" layer, since it's used in TEST only, and protobuf doesn't like barewords like TEST in the syntax anyway.
  • Replace the "loss" layer with a "prob" layer.  It should look like:
    layer {
      name: "prob"
      type: "Softmax"
      bottom: "ip2"
      top: "prob"
    }
    If you're simply replacing the loss layer with the new text, rather than removing and replacing, it's important to take out the bottom: "label" part, or else you'll probably get an error along the lines of Unknown blob input label to layer 1.  Also, just use plain Softmax as your perceptron type in this layer; nothing else.
  • Make sure you don't have any string values (barewords) that don't have quotes around them, such as type: SOFTMAX or phase: TEST.
  • If you have both the "loss" layer and the "prob" layer in place in deploy.prototxt, you will see Failed to parse NetParameter.  Again, be sure you replaced the "loss" layer with the "prob" layer.
  • If you forget the --channel_swap="0" argument on a single-channel image, and you don't have something in your code to the effect of Git pull #2359 mentioned above, you will see the message "Channel swap needs to have the same number of dimensions as the input channels."

Epilogue


Later on, as this algorithm gets closer to deployment in a large production setting, it could be nice to tweak it in order to get the best success rate on the test data.  There are some neural networks developed to classify the MNIST data so well that they have actually scored higher than their well-trained human counterparts on recognizing even the most chicken-scratch of handwritten digits.  It has also been noted that some algorithms end up getting significantly weaker performance on other datasets such as the USPS handwritten digit dataset.

More Information:


Thursday, July 9, 2015

Restoring the Granddaddy of Modern Computers: the IBM 5150

It Was a Dinosaur Back Then...


At some point a very long time ago in my life, I acquired an IBM 5150 PC from my grandfather.  I'm not sure why he wanted to give it to me at that time, but I did have fond memories of playing old games on 5.25" floppy such as Grand Prix Circuit and Wheel of Fortune with my cousins on hot summer days in Grandpa's garage in Houston (along with a similarly vintage Ferrari 308GTB which always remained under wraps -- I didn't even realize it was blue until after he died), so I was definitely happy to take it.  (The computer, of course. ;)  By most people's definition, the 5150 is the root of the modern personal computer, but in the late '90s when I received the machine, I did not have the right skills or tools to get it up and working; moreover, with no expansion cards installed (no floppy disk controller or video card in particular), it would not have been very useful nor even easy to triage and fix.  Fast forward 16 or 17 years since my last attempt, during which time I got degrees in Computer Engineering and Computer Science, and with a little bit of motivation from an outside event, now there's a newly-restored 5150 sucking gobs of power off the grid. :-P

This restoration project was spurred by the closing of a computer store in Arlington, TX called Electronic Discount Sales.  They have been on Pioneer Pkwy for nearly 30 years, but the owner has finally decided after all this time that he wants to "semi-retire," so he has been working on closing the store for several months now by trying to self-liquidate all of the remaining merchandise.  It is quite a large building, probably occupied by a grocery store in its former life before EDS moved in.  The thing that makes this a highly unusual case, though, is that EDS contains stuff he received new 10 or even 20 years ago that still has yet to sell.  From a business perspective, we're surprised he's stayed in business this long.  But from a nerd perspective, it is amazing a place such as this still exists with all sorts of retro gadgets we used to enjoy throughout our lives.  Whatever we need to fix or to get further enjoyment out of an old computer or console system, he probably has.  Despite some of this stuff reaching its "knee" in the market (it has stopped losing value and is actually gaining value again), they still had some of these gadgets at shockingly low prices, especially in the video games section.  (To be fair, it did seem like they were asking a lot of money for certain other things they were selling, particularly the "old but not quite vintage yet" laptops.)  Also, for those who remember the earliest PCs, they had a Computer Museum devoted to this old technology, which is also in the process of being liquidated.  It is mostly from the Computer Museum that I have been able to restore my machine to something that works, at least in the most "BASIC" way.

A view of Electronic Discount Sales

A shelf of software from the 1990s

80% Off Everything tends to reveal all the obscure artifacts from an era in computing I'm sure no one misses...

After sitting in my grandpa's garage for a long time, the old relic sat in my mom's garage for yet another 16 or 17 years.  When I opened it up back then, it seemed impossible that this was the exact same machine I played all those games on -- after all, it hadn't been that long since I was playing them, and yet this machine was pretty well stripped down and even seemed to be a different color than I remembered.  I didn't even feel confident in turning it on, so I left it alone until two days after visiting EDS, when Mom was able to dig it up from her garage.  After picking it up, I started reading up on the machine and trying to learn its capabilities and what would make it tick.  Many sources pointed to a book called Upgrading & Repairing PCs, of which I have several editions at Mom's house (including the original edition), so the next day, I met her to get that original edition.  By then, I had about four different ways to confirm some important information:
  • The BIOS on the system is the 3rd iteration BIOS from 10/27/1982 (the most bug-free of them all, but still not great).
  • The system needs expansion cards to do anything useful, such as display video or read from a floppy.
  • The power supply in this particular machine is not stock, and that is a good thing.  The original PSU was very noisy yet only about 40% as powerful as the one provided to me.

The first thing a circuit-savvy individual might wish to do upon receiving an ancient circuit, especially one that has been sitting in Texas garages for most of its life, is to replace any electrolytic capacitors.  We did this on our Gold Wings pinball machine from 1986, and along with other electronic modifications, it now runs like a champ.  Electrolytic capacitors tend to dry up over time due to either low-quality manufacturing or heat stresses on their bodies, which will crack the dielectric, let in moisture, and introduce "gremlins" (odd phenomena you can't explain or that are hard to troubleshoot when using an electronic device).  There are 16 electrolytic capacitors inside the power supply, but thankfully, none on the motherboard.  After making careful notes detailing capacitance, voltage, and placement, I sent +DoesItPew to Tanner Electronics in Carrollton to obtain the needed capacitors for the PSU as well as for the two 360K 5.25" full-height floppy disk drives (which I'll get to restoring later).  I spent two or three hours removing and replacing these capacitors, and just after midnight, began putting it back together and eventually tested it on an early SATA hard drive that still had a 4-pin Molex connection for power.  It fired right up like a champ, and the voltages coming out of the other Molex connectors appeared to be correct!  Obviously, the capacitor replacement worked (ultimately I'm not sure if it was totally necessary, but for the purposes of doing a good job on a long-lasting restoration, electrolytic capacitors should get replaced).

The next step was to plug this PSU into the motherboard.  However, the inside of this computer was caked full of dust and dirt from the onset.  It is not a good idea to turn on a dirty machine, so I spent a while carefully removing the motherboard and one of the floppy disk drives, and then using a dry toothbrush to brush off all sorts of dust and dirt from the case and motherboard.  I followed everything up with squirts of compressed air, and also did all this work outside in order to keep the house clean and make sure the dust doesn't get a chance to resettle in my machine (as it was a windy day, so the dust particles would blow elsewhere).

With the PC now cleaned (and some of the chips having their shiny luster restored :-P), I plugged the PSU into the motherboard, prayed to the computer gods above, and flipped the big On switch.

Nada.

Not even a sound from the speaker.

Thanks to many folks who have been down this road before, there are some good troubleshooting guides for debugging problems starting an IBM 5150 PC.  When an IBM 5150 does not beep, the possible causes are a bad power supply or a bad motherboard.  Just because you replace capacitors in the PSU doesn't mean it's all good; again, heat stresses or dust can work "magic" on your circuits so they don't work as intended.  While the computer was running, I whipped out a multimeter and probed the power supply lines in order to assure myself that the voltages coming out of it were good.  Everything checked out within the operating specifications.  Then, I powered the system off and checked the resistance between various "hot" lines and Ground on the motherboard.  Again, all these values appeared within spec.  All this work proved that the power supply was good and there were no shorts on the motherboard.  It has been reported, though, that the tantalum capacitors regulating the power right by the PSU connectors can go bad and cause a short on the motherboard.  Luckily, I didn't need to rework a capacitor, but ultimately I did need to rework something that's much more of a pain -- you'll see later.


A Potentially Huge Time Sink


If the problem has been found not to be in the power supply, yet the computer does not beep a POST code at you, then it's either in the speaker or somewhere on the motherboard.  The speaker measured the correct amount of resistance, and the cone was still in good shape, so that wasn't the issue.  This left the daunting task of finding out what was wrong on the motherboard.  However, there is a culprit far and away more likely than anything else: faulty memory chips.  The memory in the IBM 5150 is unreliable and often goes bad.  Toggling several DIP switches in order to try to adjust the memory settings got me nowhere, so I elected to remove all of the memory chips in banks 1-3 (bank 0 is soldered into the motherboard directly).  After this, the computer still wouldn't make any noise, so I probed several other things with the multimeter and experimented with some more DIP switch settings, also to no avail.

There is a technique known as "piggybacking" where you take a good chip and set its legs right onto the legs of the bad chip.  This is an unreliable method to triage a PC, though, as you probably don't know if the good chip is actually good, you don't know which chip is the bad one, and it's not guaranteed to make the circuit behave as expected if the bad chip is not totally dead.  Nevertheless, I figured I'd give it a shot; it beats the alternative of having to order an obscure ROM chip, program it with a diagnostic tool that's notoriously buggy, and then make an adapter for it just so it fits in the original BIOS slot on the motherboard.  That sounded like an even bigger waste of time than just piggybacking, so I put a random memory chip on top of a random chip in Bank 0, and turned it on.

Nada.

I went back to double-check that my DIP switch settings indicated I had the absolute minimum amount of memory installed, and...


Voila!  It worked!



(In retrospect, this was actually a pretty good random guess, since the computer tends to appear dead if the memory fault occurs on the first two chips of Bank 0; I happened to pick the very first one, Bit 0 of Bank 0.)  The first signs of life out of this PC were the long-short-short beep code, indicating it expected a video card but did not detect one.  Immediately, I packed everything up and headed down to EDS in Arlington.

The liquidation sale had been going on for quite some time, so what was left of the inventory was rather disheveled.  I sifted through several buckets of ISA cards, but did not turn up much at all of the 8-bit ISA variety required for this PC.  I went back to their PC Museum area hoping to find any sort of useful PC card, and one of the associates helped me track down two IBM-compatible CGA/EGA cards to put into my machine.  One card, the Epson Y1272040002, was only $30.  The other, a Compaq Merlin "Enhanced Color Graphics Board", ran for $150.  Given the rarity of the Compaq card on the Internet -- it seemed like I had just stumbled across Unobtainium -- I ended up plunking down for both without much hesitation.  Fortunately, it turns out everything in their Museum is 35% off, so the total was just over $100 for both cards.  After spending about two hours searching that store, I think I found the last things of use to me from there.  What a sad day.  From EDS, I obtained a $65 Tandy CGA monitor and two video cards totaling about $120 (everything considering the 35% discount).  It's also amusing that I'm retrofitting the IBM PC with IBM-compatible parts, though that's simply due to supply issues more than anything.

After getting home and having some dinner, I tried both of the cards in the PC as-is, and neither of them seemed to do the trick.  The PC was still emitting the long-short-short beeps indicative of no video found.  I decided to take the simpler of the two cards (guess which one that was :-P) and switch its setting from Monochrome to Color.  Upon firing it up... just one short beep!  That's exactly what you want to hear.

I ran to get the CGA monitor from the other room, and plugged it in next to the PC.  Immediately, I was greeted with PC Basic, which is what you see when you don't have any working or enabled floppy disk drives.  This was enough for me, though; I was extremely satisfied with four days' worth of work after work.


Picture of the first video signal emitted from this PC in a very long time
And on the fourth night, the PC Gods proclaimed, "Let there be video!"


I spent the remainder of the night trying to conjure up my BASIC programming skills, while incorporating some of the differences I had only read about when comparing original BASIC to the QBASIC I used when first starting programming in the late '90s.  One particularly amusing aspect is that you can move the cursor wherever you want on-screen, so I altered the PC's greeting to say some immature things that were amusing until my program started to scroll the window deep into the depths of spaghetti code (what else are you going to write when you don't exactly know BASIC?).  Overall, I'd say it was pretty impressive to restore a 5150 in just a few hours a day over four days.


Can't just rest on your laurels...


Of course, it's not wise to trust a piggybacked chip for very long.  It needs to be soldered into the board eventually.  Over the long July 4th weekend, I took some time to desolder the bad memory chip from the motherboard and replace it with a DIP socket, so that any chip that sits in that spot will be removable thereafter.  This process took a while because I went about it not by simply clipping the pins, but by trying to heat up the solder in each via, then pushing each pin to the center of its via.  After each pin was centered, and ChipQuik was applied to each via as well (bismuth lowers the melting temperature of solder), I applied yet more heat to several holes at once and eventually managed to pry the chip out with a screwdriver.  Unfortunately, my IC extractor was too thick to navigate around some of the other socketed ICs, so I had to use a screwdriver (a more brutish, primitive method).  Also, while centering the pins, occasionally I would push too hard with the tool and scrape off some of the protective coating on the traces surrounding the chip.  Next time I know a chip is dead, I likely won't even bother with all this trouble.

Once the chip was removed, I used a desoldering vacuum and solder wick to remove the old solder and bismuth, then set a new DIP socket into place.  It was soldered in with new lead-free solder, and one of the memory chips from Bank 3 was installed into place.  The old chip was indignantly thrown away.  I was very proud when the motherboard I had just reworked successfully powered on and booted to BASIC!

Now that the rework was successful, I took some time to notice the errors thrown up on screen just before BASIC would come up.  First, I was curious as to what "301" meant -- it turns out that the 301 error indicates a problem with the keyboard.  For some reason, I have to leave my keyboard unplugged until after the computer boots up, or else it initializes with the wrong data rate and sends a bunch of gibberish.  In any event (plugged or unplugged), I get the 301.  No big deal right now; I'll try it with one of my Model Ms and see how it goes.

Once I discovered that 301 was an error, though, it got me thinking about the "201" also displaying on my screen.  It turns out 201 is a lot more interesting, and indicates a memory error.  The specific memory error I was getting indicated there were problems with Bits 2, 4, 6, and 8 in Bank 1 (the message was 1055 201 -- 10 = Bank 1, and 0x55 = 0101 0101 in binary, where ones indicate problem bits).  This was because I had no memory installed in Bank 1 anymore, due to trying to isolate RAM problems, so I repopulated Banks 2 & 3 and booted once again.  This time, the machine was satisfied.
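Just to spell out that decoding, here is a tiny snippet that follows the convention described above (the first two digits name the bank, the last two are a hex mask of the failing bits, read most-significant bit first and counted from 1):

code = "1055"                                   # from the "1055 201" POST message
bank_digits, bit_mask = code[:2], int(code[2:], 16)
bits = format(bit_mask, "08b")                  # "01010101"
failing = [i + 1 for i, b in enumerate(bits) if b == "1"]
print(bank_digits, failing)                     # prints: 10 [2, 4, 6, 8]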

There are a couple bugs on the 10/27/1982 IBM PC BIOS that wreak havoc with the memory on the 64-256KB board (which is the one I have).  The first is that, due to a portion of a byte being set with an incorrect value when not all 4 banks of memory are enabled, the system multiplies the number of chips by the wrong number and seriously under-reports the amount of RAM installed in the system.  The second is that, for the same reason but in a portion of a different byte, the system tries to check much more memory in its POST initialization than what might actually be installed.  For instance, if Bank 1 is enabled, it will try to test memory in Banks 1-3.  When Banks 1 & 2 are enabled, it thinks there's so much memory that you would need the memory expansion card in order for all tests to pass.  Luckily, the expected behavior is exhibited when all 4 banks are enabled -- it runs the tests in exactly the 4 banks.  Based on these two bugs, it makes very little sense to run a 64-256K IBM 5150 with less than 256K of memory.

Nevertheless, I used these glitches to my advantage when testing the remainder of the memory chips.  It turns out that only the one chip at Bank 0 Bit 0 was bad, so I have been in contact with some of the local electronics stores to see if any of them happen to carry a suitable replacement.  Luckily, it turns out that technology has run in the bloodstream of the Dallas economy for some time, so I shouldn't be too far away from finding the chip.  However, I have other projects to tend to, now that this system is at least booting up happily...


Useful Sites


If you too are on a quest to restore an IBM PC, XT, or AT system, here are some good places:

Thursday, July 2, 2015

Annoy mobile & desktop users with push notifications through the browser!

To say the least, the combination of Google Chrome, Google Cloud Messaging (GCM), and a new HTML5 concept called service workers has the power to help you develop robust applications involving push notifications without requiring the user to install an app from the App Store.  Do not abuse it.  You will make your users angry, and when you do that, you make me angry. :-P  Be careful when you devise use cases for the new processes outlined below.

Here's the scoop: GCM now works with a relatively nascent feature in modern Web browsers called service workers.  Service workers allow processes to continue running in the background even after a webpage is no longer open, thus allowing them to show push notifications, cache data, run background computations, and monitor various other system states.  Below, you will see the very basic mechanism for sending a plain push notification in the browser.  As you go along with this tutorial, you will build up more functionality and eventually make it quite robust through three different phases (Basic, Pizzazz, and Spreading Wings).

But First, An Important Note About Proxies


Some corporate proxies are relatively dumb and block access to specific ports; GCM is configured to work using the XMPP protocol on ports 5228-5230.  Other corporate proxies are smarter and will block traffic based on the source IP.  In either case, you will need to disconnect from the proxy in order to receive the message.  This applies no matter how far along you are in the tutorial.

It also makes Spreading Wings a little bit difficult to pull off, as you will need to be on the proxy to load the initial page, off the proxy to receive the push notification, and then possibly back on the proxy in order to talk to the API that adds the extra data to your push notification, especially if your API exists on a server inside a corporate firewall.

Let’s Begin


  1. Pre-work:
    1. Install Node.js and npm (Node package manager).
    2. Set up an application in the Google Developers Console https://console.developers.google.com/.  In this application, enable the “Google Cloud Messaging for Android" API in the “API” -> “APIs & auth” section.  Then, in the subsection below (“Credentials”), set up a server API key for public API access.
  2. Install the following Node modules via npm (conveniently listed for easy copy-paste) into an empty directory (we’ll call this your server-root directory):
    1. For Basic: gulp gulp-connect
    2. For Pizzazz: body-parser child_process express proxy-middleware url
    3. For Spreading Wings: fs
  3. Create a new directory inside your server-root directory called dist.  Clone the Google push notification example repo https://github.com/GoogleChrome/samples/tree/gh-pages/push-messaging-and-notifications into dist.  This contains the user-facing Web app you will use to enable/disable push notifications, as well as the service worker that handles the background tasks.  The files you should get are:
    1. config.js - this is where you provide your GCM Server API Key in a dictionary called window.GoogleSamples.Config, key name is gcmAPIKey.
    2. demo.js - A file they included that provides extra logging features on the page and causes things to happen on load.  Probably not too important.
    3. index.html - This is your UI.  It needs to reference at least config.js, main.js, and your manifest, and contain the <button> one toggles to control push notifications (unless you want to modify all the UI management going on in main.js).
    4. main.js - Definitely the longest file.  Contains the JavaScript to initialize the service worker, plus the logic to run when the <button> is toggled.
    5. manifest.json - Permissions for your application.  Special parameters here include the gcm_sender_id (your 12-digit Project Number, as visible from “Overview” in your Project Dashboard) and the gcm_user_visible_only parameter (evidently Chrome 44+ won't need this parameter).
    6. service-worker.js - This file contains the two event listeners waiting for push (the signal from GCM) and notificationclick (an action to take when the user clicks on the notification).  Set up the appearance of your push notification here.  One good thing to do upon click is to actually close the notification (event.notification.close()), since apparently current versions of Chrome do not do this automatically.
  4. In case you glossed over the description of each file above, I shall reiterate the changes you need to make to two files.  First, tweak config.js so that the gcmAPIKey field is equivalent to the server API key you generated in step 1.2.  Also, tweak manifest.json so that your gcm_sender_id is equivalent to your application number, as seen on “Overview” in your Project Dashboard.
  5. Create your Gulpfile in the server-root directory so that you can serve up your new Web app.  In case you're not familiar with that, this is what your super-basic Gulpfile should look like now:


// gulpfile.js
var gulp = require('gulp');
var connect = require('gulp-connect');

gulp.task('default', function () {
    connect.server({
        root: 'dist/',
        port: 8888
    });
});

Time to try it!  Run your application by navigating to your server-root directory and running ./node_modules/.bin/gulp (usually I just make a symlink to this file in the server-root directory).

At this point, you can run your in-browser GCM push notification code from within Chrome 42 or newer at http://localhost:<port number you selected in the Gulpfile>.  Use cURL, as instructed, to send the push notification to your device.  As long as you downloaded your code from the GitHub repo and did not modify it except for adding your correct GCM server API key and Project Number, it should simply work with no further intervention.

Try it out for a little while, and you will quickly grow tired of its rather limited functionality.  I’m sure you would like to spice it up a bit more than just getting one generic push notification with predefined text set in the code.  Since embedding data into browser push notifications is not yet supported (unlike when using GCM on Android), you can add some pizzazz by wiring up an external server to provide custom displays based on exactly which registered device is receiving the push.  And, of course, you can set up such a server quite easily using Node.

  1. Make a new directory off the server-root called api, and cd into it.
  2. Make a server.js file that leverages express, body-parser, and http.
    1. To keep things simple, make routes that represent the GCM registration IDs you expect to serve to.  Since we’re building an API rather than a website, a “route” in the context of express will be equivalent to an API endpoint.  I made my endpoints in the form “/push/*”; this way, everything in the * is parameterized, and I am expecting the registration ID to exist in req.params[0].  Take this value and make some conditional logic that will return differing responses based on what value was provided.  Later on, you can take out this logic and pass the value directly into some sort of database that can help you generate the desired response.  That piece, however, is out of the scope of this tutorial since I’m trying to use Javascript only throughout this example.
    2. Make sure you set the port number to something different than where you are running the Web server for your application’s user interface.  I have chosen to run the UI at port 8888 and the API at port 8078 (for HTTP) / 8079 (for HTTPS); this avoids having to run gulp as root, which is required if you want to run the server on a standard port such as 80 or 443.
  3. Modify your service worker so it can parse the registration ID from the push notification.  You will use this as your “key” to look up exactly what the push notification will say.
  4. Modify your service worker so it can make requests to your new API, process them, and actually display different content based on the API’s response.
  5. Set up your Gulpfile to start this back-end server before launching the UI server.  Also, in your UI server connection logic, use proxy-middleware so that it appears to actually serve the API from within the webapp.  This will help you avoid errors with cross-site scripting.  Also, out of convenience, you should modify the default task to simply call two other tasks that individually start the API and UI servers; this will come in handy shortly.

For reference, this is what your server.js file should resemble:

var fs = require('fs');
var express    = require('express');        // call express
var app        = express();                 // define our app using express
var bodyParser = require('body-parser');
var http = require("http");

// ROUTES FOR OUR API
// =============================================================================
var router = express.Router();              // get an instance of the express Router

//***************************************************************
//Push data route (accessed at GET http://localhost:8888/api/v1/push/*)
//***************************************************************

router.get('/push/*', function(req, res) {
  var user = req.params[0];
  if (user == '[Google GCM user registration ID 1]') {
    res.json({"notification":{"title":"Check this out","message":"Here's your notification you asked for!","icon":"http://localhost/some-picture.png"}});
  } else if ... <etc>
});

// REGISTER OUR ROUTES -------------------------------
// all of our routes will be prefixed with /api
app.use('/api/v1', router);

// START THE SERVER
// =============================================================================
var httpServer = http.createServer(app);
httpServer.listen(8078);
console.log('Magic happens on port 8078');

And this is what your Gulpfile will look like, assuming you split your tasks:

gulp.task('connectApi', function (cb) {
  var exec = require('child_process').exec;
  exec('node api/https-server.js', function (err, stdout, stderr) {
      console.log(stdout);
      console.log(stderr);
      cb(err);
  });
});

gulp.task('connectHtml', function () {
    var url = require('url');
    var proxy = require('proxy-middleware');
    var options = url.parse('http://localhost:8078/api');
    options.route = '/api';

    connect.server({
        root: 'dist/',
        port: 8888,
        middleware: function (connect, o) {
            return [proxy(options)];
        }
    });
});

gulp.task('default', ['connectApi', 'connectHtml']);

At this point, make sure to stop gulp and node so that your original server is torn down.  Close Chrome.  Restart gulp (or at least the UI portion), and then restart Chrome.  Navigate to chrome://serviceworker-internals and make sure nothing is listed there.  If there’s something present, try to remove it and then restart Chrome again.  (I haven’t figured out how to more conveniently refresh service workers; it’s kind of a hassle.  However, it's possible that Chrome Canary does an even better job at discarding old settings than regular Chrome does.)  Now, navigate to your UI and re-enable push messages.  Copy down your device’s new registration ID and put that into your API server.  Unfortunately, this will require yet another restart of the API server (or at least starting it, if you didn’t do so already).  However, upon using cURL as instructed to send the push notification to your device, it should now be serving you custom content through your API!

Spreading Wings


Now I’m sure you want to take your demo beyond localhost and put it on a real server somewhere, where you can access it from any computer on your local network or possibly the Internet.  Google requires that push notification apps out in the wild that leverage GCM through service workers be served over HTTPS, so you will need a certificate.  If you are lucky enough to have an SSL certificate through a trusted authority for your Web site, then you should be able to make very simple modifications to your server and Gulpfile in order to run the app on a real domain name.

On the other hand, if you’re cheap or just doing this for testing, there are two ways you can go about serving your app over HTTPS while still using the API server on a separate port: you can continue to use proxy-middleware to serve the API through a path on the UI server, or you can separate the two and make a request directly from one server to the other.  Both require you to create your own self-signed certificate anyway, so let’s think through the benefits & drawbacks of each implementation:

Sticking with proxy-middleware:
  • +1 You’ve already made it this way, so why change now
  • -1 It requires configuration changes that need to be removed when you plan to put your app in production


Ditching proxy-middleware:
  • +1 The configuration change allows just one cross-domain server to make requests, rather than letting through any unscrupulous certificate
  • -1 You’ll have to take out a lot of logic you wrote to get your app Pizzazz’d
  • +1 The changes actually reduce dependencies on libraries and make the Gulpfile a little bit shorter again


So, without any further ado, here’s the security pre-work you need to do, depending on what route you want to take:

Way 0: You already bought an SSL certificate that verifies your domain name.


I’ll tell you how to make your UI and API server secure below.

Not-Way-0 Prerequisite: Make your certificate(s)


Make your security certificate.  Hopefully you have access to OpenSSL on your machine.  Run this command:

openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem

This will generate a certificate suitable for your PC or Mac.  Fill out each of the prompts.  For the Common Name, I used the FQDN of my server computer.  The instructions for installing it vary across different OSes.  Then, run:

openssl pkcs12 -export -out cert.pfx -inkey key.pem -in cert.pem -certfile cert.pem

This will generate a certificate compatible with Android in case you’re interested in trying your demo on a real mobile device.

Way 1: Keep using proxy-middleware


In the realm of HTTPS, the server and the client need to present certificates.  The browser will complain if the server & client present certificates with the same Distinguished Name (DN).  However, notice the code in service-worker.js; the fetch() command does not give you a way to supply a client certificate.  Thus, you need to add a setting to the connectHtml task of your Gulpfile so that Node will ignore certificates with errors:

process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

Note this is normally set in the settings object you pass to https when you initialize it (the key is rejectUnauthorized), but for some reason, I couldn’t get that particular key to work; setting the environment variable programmatically is what did the trick.  When you run your server in production, you will definitely want to remove this line (as it should reject unauthorized certificates by default).

Way 2: Ditch proxy-middleware and make the request from your service worker directly to the API server.


Due to many years of devastating cross-site scripting (XSS) attacks perpetrated on Internet users, browsers typically forbid a script on one site from requesting content from another.  Node.js makes it fairly easy to whitelist your service worker by allowing you to set the Access-Control-Allow-Origin HTTP header in the response.  Make sure to set this to the exact host and port that your service worker lives on, or else the request will fail (you will see the failure in the console of the service worker).  Here’s a short code snippet of how you do that:

router.get('/push/*', function(req, res) {
    var user = req.params[0];  // the subscription ID parsed from the URL
    // Whitelist the exact origin (scheme, host, & port) of the UI server hosting the service worker
    res.header("Access-Control-Allow-Origin", "https://yourhost.yourdomain.com:8888");
    etc...

Changing your application code to take advantage of your security pre-work


Ok, now that we’re done with the security pre-work, here is how you set all that up in your application code.

  1. Your API server should now rely on the https module rather than http.  This is a simple change; just change require("http") to require("https").  You will feed it an object of secure settings, including your private key, certificate, CA, and an option called requestCert that will ask the browser for a certificate.  Note that if you did not self-sign your key, it might not be necessary to include the CA field.
  2. If you kept proxy-middleware, be sure to add that programmatic environment variable setting for NODE_TLS_REJECT_UNAUTHORIZED as mentioned above.
  3. If you ditched proxy-middleware, make sure to set the Access-Control-Allow-Origin HTTP header in each response you return from the API server (see example above), or else the response will never be properly received by the service worker.
  4. Your Gulpfile will also need to support starting the UI server with HTTPS support.  See the code snippet below.  Again, if you purchased your certificate, you might not need to provide the CA field.


Here are the important changes to your server.js file:

var privateKey  = fs.readFileSync('./key.pem', 'utf8');
var certificate = fs.readFileSync('./cert.pem', 'utf8');

var secureSettings = {
  key: privateKey,
  cert: certificate,
  ca: certificate,     // unnecessary if your certificate came from a trusted authority
  requestCert: true    // ask the browser for a client certificate
};
...
var httpsServer = https.createServer(secureSettings, app);
...
httpsServer.listen(8079);
console.log('Magic happens on ports 8078 (HTTP) & 8079 (HTTPS)');

Make the following modifications to your Gulpfile's connectHtml task:

    // Let the proxy accept the API server's self-signed certificate (remove in production)
    process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

    connect.server({
        root: 'dist/',
        port: 8888,
        https: {
            // read the key & cert from the API directory (the same self-signed pair generated earlier)
            key: fs.readFileSync(__dirname + '/api/key.pem'),
            cert: fs.readFileSync(__dirname + '/api/cert.pem'),
            ca: fs.readFileSync(__dirname + '/api/cert.pem'),
            rejectUnauthorized: false
        },
    ...

Before using your new secure site on any device (including your localhost), you will need to explicitly trust your new certificate and CA (assuming you didn’t just buy one outright).  The steps to do this differ across operating systems, so I will not go into that here.  Just note that telling your browser to ignore errors with the security certificate will not solve your problem, as the GCM service will be disabled until all the trust issues are fully worked out.  If you’re having trouble moving your certificate to another device, remember that you can set up a file server cheaply with Express too.  To serve files from a particular directory, just add this:

app.use('/files', express.static('path/to/files'));

Now when you visit http[s]://<hostname>:<port>/files/ (note that the host name & port number pertain to your API server, not your UI server), you will have access to any files in the designated path on your system.  However, this will not help you if you’re running the Chrome browser on iOS, as it manages its own trusted certificate store and there’s no easy way for you to modify it.
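
If you’d rather not bolt the file serving onto your API server, here’s a minimal, self-contained sketch of a throwaway Express file server; the port number (8077) and the certs directory are just placeholders, so adjust them to whatever you’re actually using:

var express = require('express');
var app = express();

// Serve everything in ./certs (e.g. cert.pem and cert.pfx) under the /files path
app.use('/files', express.static('certs'));

app.listen(8077, function () {
  console.log('Certificate file server listening on port 8077');
});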

And finally, it should go without saying, but in case you forget: restart your servers and restart Chrome to clear out your old data!

Two things that might help you troubleshoot problems getting an external device talking to your server:
  • It is possible that you need to sign into your Google account in order for this to work.
  • If nothing seems to be happening, check your other device carefully (especially if it is a phone).  Phones might only show you the prompt to "accept push notifications from this website" if you are scrolled up to the top of the page.  Of course, you have to accept that if any of this is to work.


Customize Notifications Per User


Earlier, you saw how to make your own backend Node server to serve custom data for your push notifications.  However, upon implementing this in the "Spreading Wings" context (i.e. once you started accessing the services and receiving pushes on multiple devices), you probably grew tired of seeing the same push notification appear on every device after a while.  Here is how you mitigate that:

  1. Use the Promise-returning getSubscription() method to get the Subscription object.
  2. Upon receiving the result of that operation, you get a lot of details about the subscription for the device that just received the push notification.  Typically, you would want to read the subscription ID from the return value's subscriptionId property, but (as described in the comment below) that property didn't seem to exist when I tried this for myself in Chrome Canary 45.  Since this code is currently being used with GCM only, it is OK to make some assumptions about the endpoint URL so you can parse out the subscription ID.
  3. Pass this subscription ID along to your API, which can then serve custom data tailored to each specific device registered with it.
See the code example below.  This goes into your service worker (notice that the call to fetch() now lives inside the callback passed to getSubscription()):

self.registration.pushManager.getSubscription().then(function(pushSubscription) {
  // The usual way of getting the subscription ID
  // (PushSubscription.subscriptionId) returns null,
  // so do some string parsing on the "endpoint" URL instead
  var subscrID = pushSubscription.endpoint.split("/").pop();
  // Wait for the HTTP REST call to come back with a response
  return fetch("https://yourhost.yourdomain.com:8888/api/v1/push/" + subscrID).then(function(response) {
    ... <your own logic>
  });
});
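
To round out the picture, here’s a minimal sketch of what the /push/* route on the API server might do with that subscription ID; the in-memory customData map, its placeholder keys, and the fallback message are purely illustrative and not part of the original demo:

// Hypothetical per-device payloads, keyed by subscription ID (placeholder keys)
var customData = {
  "subscription-id-for-device-one": { title: "Hello", body: "Custom message for device one" },
  "subscription-id-for-device-two": { title: "Hello", body: "Custom message for device two" }
};

router.get('/push/*', function(req, res) {
  var subscrID = req.params[0];
  // Whitelist the UI server's origin, as before
  res.header("Access-Control-Allow-Origin", "https://yourhost.yourdomain.com:8888");
  // Fall back to a generic payload if this device hasn't been registered with custom data
  res.json(customData[subscrID] || { title: "Hello", body: "Generic message" });
});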

Epilogue


Keep in mind that service workers are not widely supported across the World Wide Web today.  Currently, the functionality only exists in Chrome 42+, Opera 29+, and Chrome Mobile 42+ (but not Chrome for iOS, which does not support service workers), and it is an experimental feature in Firefox that must be turned on manually.  Over time, you can check on browser support for service workers at this website: http://caniuse.com/#feat=serviceworkers
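
Given that patchy support, it’s worth wrapping your registration code in a simple feature check so unsupported browsers fail gracefully; here’s a quick sketch (the console messages and the /service-worker.js path are just illustrative):

if ('serviceWorker' in navigator && 'PushManager' in window) {
  // Safe to register the service worker and subscribe to push messages
  navigator.serviceWorker.register('/service-worker.js').then(function(registration) {
    console.log('Service worker registered with scope:', registration.scope);
  });
} else {
  console.log('Service workers and/or push messaging are not supported in this browser');
}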

The above code is great for demonstration purposes.  While it is probably robust enough to take into production, you may wish to consider using a tried and true backend framework if you are working on an enterprise-scale project.  (Or, maybe just buy a bunch of time on the cloud and use a really good load balancer.)


Source: https://developers.google.com/web/updates/2015/03/push-notificatons-on-the-open-web?hl=en