
Develop React Native apps on a Chromebook

Michael Brown  October 22 2017 09:08:14 PM
Surely it's not possible to develop Android and iOS apps using React Native on a Chromebook, right?  Well, it is!  Kind of...if you cheat a bit!

First off, you'll need to enable Developer Mode on your Chromebook and then install a full Linux desktop environment, either as a dual boot or via crouton (which lets ChromeOS and Linux run side by side).  See https://github.com/dnschneid/crouton for instructions.

Next, you'll need to install Node and NPM in your Linux environment.  When you've done that, you can install the Create React Native App tool (which I'll abbreviate to crna from here on) via NPM.  See here for instructions on how to do that: https://facebook.github.io/react-native/docs/getting-started.html

Once you have a crna app all set up, you'll be able to have it appear on your iPhone or Android phone via a mobile app called Expo, which you'll need to download from your respective app store.  Your phone will need to be on the same wifi network that your Chromebook is on in order for this to work though.

There's one more important step for this all to work on a Chromebook though: you'll have to open the Chromebook's firewall settings to allow the two ports that Expo needs to work.  These are ports 19000 and 19001.  Here's how to do that:

Opening Chromebook Firewall Ports
  1. On your Chromebook, open a crosh tab (Ctrl+Alt+T)
  2. Type `shell` at the crosh prompt
  3. Enter the following two lines at the prompt:
sudo iptables -A INPUT -i wlan0 -p tcp --dport 19000 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -i wlan0 -p tcp --dport 19001 -m state --state NEW,ESTABLISHED -j ACCEPT


That should have opened the two ports on your Chromebook's firewall.  Note: these settings are not sticky.  The next time you restart your Chromebook, they'll be forgotten.  (This is actually a good thing, security-wise.)   You'll just need to enter them again before starting your Linux crouton.

Now, in your Linux crouton, kick off your crna bundler, e.g. `yarn start`, then try to connect via your phone’s Expo client (e.g. by scanning the QR code).

You may get a red screen saying “Could not connect to development server” at this point.  If so, check the console in your Linux crouton and see whether it has finished building the JavaScript bundle yet.  Chances are that this will take a minute or two, unless you have a super new Chromebook Pixel!  Most Chromebooks' processors won't have enough grunt to finish the compile that quickly, so you just need to wait a bit.  If crna is still building the bundle, wait for it to finish and then hit the reload link on your phone’s screen.


Obviously, this only gets you so far with development.  You're still limited to using the Expo app on your mobile device.  If you want to run your app natively on a device, you'll have to eject from the crna environment.  You'll then need a Mac to put your iOS app directly onto your iPhone/iPad, and to roll it out to the App Store.


Getting npm-check-updates to update global packages

Michael Brown  August 19 2017 09:34:25 PM
I often use the npm-check-updates package to check whether my npm packages are up to date.  Particularly my globally installed npm packages, such as...well, npm-check-updates itself!!!

It does have one rather annoying limitation though: the -u option, which forces the packages to upgrade, only works with locally installed packages (i.e. at a project level).  It won't work for your global packages.  So if you issue this command in your terminal (NB: ncu is an alias for npm-check-updates):

ncu -g -u


you'll see this error: ncu cannot upgrade global packages. Run npm install -g [package] to update a global package.  This is a pain, because the npm install -g command requires you to specify the packages' names, and there might be a few of these.  For example, when I just ran ncu -g on my home Mac, it flagged the following packages as out of date:

create-react-app         1.3.3  →  1.4.0
create-react-native-app  0.0.6  →  1.0.0
eslint                   4.2.0  →  4.5.0
eslint-plugin-react      7.1.0  →  7.2.1
jsdoc                    3.4.3  →  3.5.4
npm                      5.0.3  →  5.3.0
prettier                 1.5.2  →  1.5.3


To update them all, I would have to type out npm install -g followed by the full list of package names.  That's a lot of typing and/or copying and pasting.  So I Googled around to see if there was a way of formatting ncu's output to produce a more update-friendly list.  This is the command that I found (sorry, I can't remember where now!):

ncu -g | awk '{print $1}' | paste -sd " " -


Issuing the above command gave me a straightforward list of packages to update, like so:

"create-react-app create-react-native-app eslint eslint-plugin-react jsdoc npm prettier"

All I need to do is stick an npm install -g in front of that string, and I'm away.

It's not a particularly easy command to remember, so you may want to put it in a script and run it from there.



    Random User Generator

    Michael Brown  July 16 2017 02:19:03 AM
    Hey, we all need one of these at some point in our development careers!!

    The Random User Generator is, to quote their blurb: "A free, open-source API for generating random user data. Like Lorem Ipsum, but for people".  Just make an Ajax call to their API, and back will come a list of random users in JSON (or XML, CSV, or YAML) format.
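
    For example, here's roughly what that Ajax call could look like using fetch.  Treat this as a sketch: the endpoint and the results/name/email fields below are as I remember them from the API's documentation, so check randomuser.me before relying on them:

    fetch("https://randomuser.me/api/?results=5")
      .then(function(response) { return response.json(); })
      .then(function(json) {
        // Each entry in results is one randomly generated user
        json.results.forEach(function(user) {
          console.log(user.name.first + " " + user.name.last + " <" + user.email + ">");
        });
      })
      .catch(function(err) { console.error("Random User request failed:", err); });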

    I came across this handy resource while checking out Spencer Carli's React Native Flatlist Demo on Github.  Here's a screenshot of that running in the iOS Simulator.

    Image:Random User Generator
    The data shown here is being pulled from the Random User Generator by an Ajax call.

    Hide the ***** footer on Medium.com posts!!

    Michael Brown  March 14 2017 01:47:38 AM
    I'm a big fan of the Medium publishing platform, but hate the way that it (and its various offshoots) displays a fixed footer bar.   You know, the one that says "never miss an update from [whoever]", and that has a *Get Updates* button on it?    Yeah, guys, I've got that now.  So can I please close the darned thing and get some of my screen real estate back?  It seems not; Medium doesn't give you a way of doing that.


    Add Footer Close Button Extension

    So, I put together the Add Footer Close Button extension for Google Chrome.  It adds a button that hides that footer bar whenever it detects one.  That's all it does.  There are no options.

    It currently works on medium.com itself, and also hackernoon.com.  I'll add more offshoot sites as I come across them.  (It's just a matter of updating a permissions file.)
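
    If you're wondering how something like this works, the gist is just a content script that looks for the footer element and bolts a close button onto it.  The sketch below is only an illustration - the selector is an assumption on my part, not the extension's actual code, and Medium's markup changes over time:

    // Content script sketch.  ".js-stickyFooter" is a guessed selector; the real
    // extension has to track whatever class Medium currently uses for the footer bar.
    function addCloseButton() {
      var footer = document.querySelector(".js-stickyFooter");
      if (!footer || footer.querySelector(".footer-close-button")) {
        return;
      }
      var button = document.createElement("button");
      button.className = "footer-close-button";
      button.textContent = "✕";
      button.addEventListener("click", function() {
        footer.style.display = "none";
      });
      footer.appendChild(button);
    }

    // The footer is injected after page load, so keep checking for it
    setInterval(addCloseButton, 1000);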



    Medium.com footer showing my close button

    React without build steps - Babel Standalone

    Michael Brown  January 19 2017 02:00:05 PM
    When I got started with React, over two years ago now, we had the JSX compiler tools.  All you had to do was load this JavaScript library into your HTML file, along with the React JavaScript library itself, of course, and you were done.  All of the React transpilation took place on the fly, within the browser itself.  Easy!

    I don't know what I'd have done if I'd been told that to start with React you need to install Node and NPM, then spend two hours fiddling about with your Webpack config file before you can write a line of your application code.  Something tells me though that I would not have had a positive reaction!  Unfortunately, that's largely what new developers are told to do now, since Facebook deprecated the JSX Tools in favour of Babel, and then the latter dropped their own in-browser transpilation tool (browser.js) in Babel 6.

    Fortunately, help is at hand in the shape of Babel Standalone.  To set it up, you simply add the babel.js (or babel.min.js) library to your HTML as a script tag, just after your main React and ReactDOM script libraries.  Then, when loading your own JavaScript file (see simple-list.jsx in the code below), you set its type to "text/babel" and add any required presets in the script tag's data-presets attribute.

    That's it; you're done!!:
    <!DOCTYPE html>
    <html>
    <head>
    <meta charset="utf-8" />
    <title>babel-standalone Hello World example</title>
    </head>
    <body>

    <div id="main"></div>
    <script src="//cdnjs.cloudflare.com/ajax/libs/react/15.4.2/react.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/react/15.4.2/react-dom.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/babel-standalone/6.21.1/babel.min.js"></script>

    <script type="text/babel" data-presets="es2015, react, stage-2" src="js/simple-list.jsx"></script>
    </body>
    </html>



    Here's the contents of that simple-list.jsx file:
    const MyListElement = ({ listElement }) => (
      <li>{listElement}</li>
    );

    class MyList extends React.Component {
      render() {
        const { listArray } = this.props;
        // key prop keeps React's list reconciliation (and console) happy
        const list = listArray.map((member) => (
          <MyListElement key={member} listElement={member} />
        ));
        return (
          <div>
            <h2>Man in the High Castle Characters</h2>
            <ul>{list}</ul>
          </div>
        );
      }
    }

    var myObj = window.myObj || {};
    myObj.listArray = ["Joe Blake", "Juliana Crane", "John Smith", "Frank Fink"];

    ReactDOM.render(
      <MyList listArray={myObj.listArray} />,
      document.getElementById("main")
    );


    It's a very simple app, which just displays a list of characters from Amazon's superb adaptation of Philip K Dick's The Man in the High Castle.  I've implemented two React components here: MyListElement as a function component, and MyList as a full ES6 class.  (I know that both could have been implemented as function components, since they don't require any React lifecycle hooks, but I wanted to show how Babel Standalone handles both component types.)  Demo here.

    You don't get module bundling/loading, of course, because Babel is not a module loader; you'll need Webpack or Browserify for that.  So you'll have to manage the script tag order yourself, and that's a pain.  But for getting devs started, that's not a show stopper in my book.

    Angular Release Candidate 5 - "major breaking changes"???

    Michael Brown  August 31 2016 05:13:55 AM
    Who’d be an AngularJS Developer?  Well, quite a lot of people, if the stats are to be believed!  But oh Lordie, they really seem to be having a rough time of it with the upgrade to Angular 2.

    I've just listened to a recent Adventures in Angular podcast, entitled Angular RC5 and Beyond.  I’m not much of a fan of Angular, as you can probably tell, but I like to keep up with it anyway.  If for nothing else, it’s good to have reasons to show people/bosses that moving to Angular would be a truly terrible idea.  The Angular 2 rollout is giving me plenty of those!

    Of Release Candidates

    Anyway, RC5 refers to Angular Release Candidate 5.  "Aha", I thought; "they must be pretty close to release if they're on a fifth Release Candidate!"  However, I was disabused of this thought within the first few minutes, in which we’re told that Release Candidate 5 of Angular 2 contains “major breaking changes from release candidate 4”.

    Say what?  Major breaking changes in a Release Candidate?  Thankfully, a couple of Google people are on hand to explain that in GoogleLand, things work a little differently.  A Release Candidate isn't a candidate for release, as in the gold release; you know, the way the term is applied by just about every other software company in the world.  No, it seems that for Google, Release Candidates are actually the first serious test releases geared towards public consumption.   Alphas and betas are mainly for internal testing, within Google only.  Angular 1 had seven Release Candidates, apparently.   Well, that's one approach, I suppose.

    There's a telling moment about half way through the podcast.  As one of the Google guys is detailing change after change in RC5, the podcast host pauses proceedings to ask one of the other participants why he hasn’t spoken much yet.  “Oh I’m just sitting here all grumpy, thinking of all the code I have to change...across hundreds of projects”, comes the reply.  Quite so.  And I don't think he was referring to Angular 1 code, either.


    NG Modules

    One of the big new things in RC5, apparently, is NG Modules.  These are a way of breaking your application up into more manageable fragments or components.  (So, like React has had from the get-go, then.)  It seems that Angular 1 had some kind of modules thingy in it too.  These were originally removed for Angular 2, but they’re back now.   Only they’re not quite the same as they were in Angular 1, but "it helps if you think of them that way”.


    Webpack

    Almost as an afterthought, the Google guy drops another bombshell during the podcast's closing moments:  "did I mention that Angular 2 is now moving from SystemJS to Webpack?", he asks, laughing.  I took it to be a joke at first.  But no, he was serious: they really are moving to Webpack.  That may be all to the good, because Webpack rocks, IMHO.  But really, they want to be making a change like that in the fifth Release Candidate?  (Oh, I forgot; they're not really Release Candidates, are they!)


         

    Goodbye, Chromebook, hello...Chromebook!

    Michael Brown  August 21 2016 12:21:34 AM
    It was my birthday last week.  One of my treats was a brand-new Toshiba Chromebook 2, bought to replace my aging Samsung model.

    The latter has slowed down to the point of being barely useful.  To be honest, it was probably underspecced when I bought it three years ago, having an ARM processor and only two Gig of RAM.  But the truth is that Intel processors at the time simply could not match the battery life of the ARM processors: the Samsung could give me over 8 hours of battery, which I'd never seen from a laptop before!

    However, that ARM processor also came with some limitations, which I hadn’t appreciated when I bought it.  For one thing, some Chrome Apps didn't even run on the ARM version of the Chromebook; they would only run on Intel versions, which was something of a disappointment.  Maybe that’s less of a problem today.

    Full Linux Install

    Another problem was with the full Linux installation that I’d always intended to put on any Chromebook that I bought.  (With a Crouton-based install, you can switch instantaneously between ChromeOS and a full Linux install, which is a pretty neat trick!)  What I hadn’t realised, though, was that ARM versions of some Linux packages simply aren’t available.  Most of the biggies are present and correct, e.g. Chrome, Firefox, LibreOffice, Citrix Receiver, GIMP, as well as developer packages such as Git, Node/npm and various web servers.  But the killer was that there’s no SublimeText, boo hoo!  SublimeText may be cross-platform, but it’s not Open Source, and its makers have shown zero interest in producing an ARM-compatible version so far.  Sadly, I was never able to find a truly satisfactory replacement for it.  I finally settled on the Chrome-based Caret editor, which does a half-decent job, but it’s no SublimeText.


    The New Toshiba Chromebook 2

    Intel had to raise its game to respond on the battery life front, and to give the Devil its due, that’s exactly what it did.  Battery life is now on a par with the ARMs, but with the benefit of extra power and also that Linux package compatibility.  For example, here's SublimeText running in an XFCE-based Linux installation, in a Chrome window on my new Toshiba Chromebook:

    SublimeText on a Chromebook


    Other benefits of the Toshiba over the Samsung:
    • More powerful (Intel processor & double the RAM), so much faster performance
    • Much better screen: full HD 1920x1080 vs 1366x768 on the Samsung
    • Amazon.com delivers to Australia!!  And likely to other countries too.  (Good luck finding any decent Chromebooks actually on sale in Australia!)

    Local Storage

    Local SSD storage is the same on both models: a disappointing 16Gig.  You'll often hear ChromeOS aficionados telling you that local storage doesn't matter "cos' on a Chromebook you do everything in the cloud".  IMHO, that's a bunch of crap.  Local storage is important on a Chromebook too, especially if you have that full Linux install eating into it!!

    Now both of my models do come with an SD card slot, which allows me to boost that storage space significantly, and at no great cost.  But it's the Toshiba that shines here too, as you can see from the two photos below:

    Samsung Chromebook with SD card
    Toshiba Chromebook 2 with SD card

    In both of these photos, the SD card is pushed in to its operational position, i.e., that's as far in as it will go.  See how far it sticks out on the Samsung?  What are the chances of my throwing that in my bag and then retrieving it a few hours later with the card still in one piece?  Not high, and that's why I never do it.  It sounds like a small thing, I know, but it's a royal pain in the rear to fish around for the SD card in my bag whenever I need to use it.  With the new Tosh, the SD card sits absolutely flush with the edge of the case, so I can leave it there all the time, giving me a permanent 48 Gig of storage!!


    That Other OS

    The cost of this new baby?  $300 US on Amazon.com, which translated to just over $400 Oz, including postage.

    At which point I have little doubt that somebody is waiting to tell me "but for that kind of money you could have got a proper laptop that runs Windows apps".  But as you've probably worked out by now, I already know that.  And if I'd wanted a Windows laptop, then I would have got one.   The thing is that I don't like Windows much.  I don't like the way it works (or doesn't work), and most of the dev tools that I now live and breathe don't work natively on Windows.  (Although there is, apparently, a native Bash terminal coming to Windows 10 at some point, courtesy of Canonical.)

    And what kind of Windows apps would a $400 Oz machine even be able to run?  Microsoft Office?  It might run; as in, it might actually start up.  Adobe Photoshop?  Ditto.  And how about all those Windows games?  Well, I suppose you might coax the new Doom into a decent frame rate, as long as you were prepared to compromise a little on the graphics!

    Doom (circa 1993)

    Domino server up time: eat this, Microsoft!

    Michael Brown  August 19 2016 02:46:25 AM
    There are some things that we just take for granted.

    I have this Domino server in the cloud, on Amazon Web Services.  It just occurred to me that I hadn't updated the Amazon Linux that it's running on for a while now.  So I logged in to check it out and I was right: it has been a while.  517 days, in fact!

    That's 1.42 years.

    Or one year and five months, or thereabouts.
    Domino server uptime
    In fact, it would likely have been a lot longer than that, if I hadn't taken it down to upgrade it to Domino 9.0.1 in the first place.

    You know what?  I think I'm just going to leave it as is, and see how long it goes for...

    NodeJS posting data to Domino

    Michael Brown  August 13 2016 02:09:17 AM
    So recently, I was working on a project that was not Domino-based, but rather used web tools and REST APIs.  What a breath of fresh air!  SublimeText, NodeJS, EsLint and all that other webbie-type goodness that looks great on your CV.

    Moving back to working with our Domino-based CMS (Content Management System), I came down to Earth with a very rude bump.  You see, in that system, we store our web programming content in Notes Documents.  Our HTML, JavaScript and CSS are either typed/pasted directly into Notes Rich Text fields, or stored as attachments within those same Notes Rich Text fields.

    Not that I'm criticising the CMS itself, which actually works rather well.  It’s just the editing facilities, or lack thereof.  Typing text directly into a Rich Text field, you get no syntax checking, no linting, no colour coding: no visual feedback of any kind, in fact.  Not even of the limited kind that you get with the JavaScript Editor in the Notes Designer.

    So I was faced with a choice:
    1. Go back to typing stuff directly into Notes fields, and finding my coding errors the hard way, i.e. when it fails in the browser.  Not fun.
    2. Use SublimeText/EsLint etc to get the code right on my hard drive, then copy and paste the results to the Notes field so I could test in the browser.  And kid myself that the last step isn’t a complete and utter productivity killer.

    Obviously, neither option was particularly appealing.  Which got me to thinking… wouldn’t it be great if I could still use all those achingly trendy, webbie-type client tools, but have my code automatically synched up to my Notes Rich Text fields on the Domino server?  You know, in real time?  Then I’d have the best of both worlds.  But surely that's not possible…


    Actually, it is very possible (otherwise this would be a very short post!).  And I have built a system that does exactly that.  It’s based on NodeJS and npm on the client side, and a good old Notes Java agent on the server side.


    Basic Approach

    So here's the basic division of work between the NodeJS client and the Domino server:

    Client/server Sequence diagram

    (Sequence diagram created with PlantUML.)

    The NodeJS client gathers up the user's source file, transpiling it if necessary, and posts it to a Domino agent as part of an encoded JSON object.  (Yes, I know JSON is actually a string, but I'll call it an object here.)  The agent works out where the target document is, based on the data passed in the JSON object.  It then posts the user's decoded data to a Rich Text field on that document (or attaches it), before sending a success or error message back to the client.  The agent runs as a Web User agent, so a user ID and Domino HTTP password are passed from client to server (not shown in the diagram above).

    The NodeJS client can even be set to run in the background and watch a file on your hard drive - multiple files, in fact - for changes.  If it detects a change, it posts the changes to the Domino server immediately.  You can refresh your browser a couple of seconds later, and your changes are there, on the Domino server.
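
    As a taster, the watching part is just Node's built-in fs.watch.  The sketch below is simplified - the file paths are made up, postToDomino() stands in for the request call shown further down, and in practice you'd want to debounce the change events:

    const fs = require("fs");

    // Hypothetical list of files to keep in sync with the Domino server
    const watchedFiles = ["js/app.js", "css/styles.css"];

    watchedFiles.forEach(function(filePath) {
      fs.watch(filePath, function(eventType) {
        if (eventType === "change") {
          console.log(filePath + " changed - pushing to Domino...");
          // postToDomino() would read the file, encode it and POST it via request
          postToDomino(filePath);
        }
      });
    });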

    This isn't theory.  I have a working system now that does exactly what I describe above.  I will post the source code to Github if anybody's interested, but in the meantime here are a few tasters of how things are done.


    Posting Data from the NodeJS Client: Request Package

    The key to posting data from client to server is the npm Request package.  This is kind of an equivalent of jQuery's Ajax call, only run from NodeJS instead of in a browser.  The code below shows how you might call request to post data to a Domino agent:

    const request = require("request");

    var postConfig = {
       url: "http://acme.com.au/processing.nsf/processdata?openagent",
       method: "POST",
       rejectUnauthorized: false,
       json: true,
       "auth": {
             "user": username,
             "pass": password
       },
       headers: {
           "content-type": "application/text"
       },
       body: encodeURIComponent(JSON.stringify(configObj.postData))
    };

    request(postConfig, function(err, httpResponse, body) {
        // Handle response from the server
    });



    The actual data that you would post to that agent would look something like this:
    {
        "targetdbpath": "mike/dummycms.nsf",
        "targetview": "cmsresources",
        "targetfieldname": "contentfield",
        "updatedbyfieldname": "lastupdatedby",
        "attachment": false,
        "devmode": true,
        "data": "my URLEncoded data goes here"
    }



    Server Side Java Agent

    So here's how the server-side Java agent interprets the JSON data that's been posted to it:

    import lotus.domino.*;
    import java.io.*;
    import org.json.*;

    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                AgentContext agentContext = session.getAgentContext();
                Document currentDocument = agentContext.getDocumentContext();

                // getAgentOutput() supplies the PrintWriter for the HTTP response
                PrintWriter pw = getAgentOutput();
                pw.println("Content-Type: text/text"); // content type of the response

                PostedContentDecoder contentDecoder = new PostedContentDecoder(currentDocument);
                String decodedString = contentDecoder.getDecodedRequestContent();

                // ...the rest of the agent (shown further down) goes here...

            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }


    It's a standard Domino Java agent.  I grab the context document from the agent context.


    PostedContentDecoder is my own Java class, which grabs the actual content data from the request_content field of that document.  This is actually a bit more complicated than it sounds, because of the way Domino handles posted data greater than 64Kb in size.  If the data is less than 64Kb, Domino presents a single field called "request_content".  If it's more than 64Kb, Domino presents a series of request_content fields, called "request_content_001", "request_content_002" and so on, for as many fields as are needed to hold the data.  The PostedContentDecoder class takes care of these differences.  The class also takes care of URL-decoding the data that was encoded by the client-side JavaScript call, encodeURIComponent() (see above), via the line below:

    requestContentDecoded = java.net.URLDecoder.decode(requestContent, "UTF-8");


    The final piece of the puzzle, in terms of interpreting the posted data on the server side, is to convert the JSON string into an actual Java object.  There's no native way of doing this in Java, but the huge advantage of Java over LotusScript server agents - and I did try LotusScript first - is that Java can easily import any number of 3rd-party .jar files to do the donkey work for it.  There are a number of such .jars that will convert JSON strings to Java objects, and vice versa.  Douglas Crockford's JSON reference page lists over 20 JSON packages for Java.

    I went with Crockford's own org.json library, which you can download from the Maven Repository.  This gives you a new class, called JSONObject, and this is what you should use.  Don't try to define your own Java data class and then map that to the JSON data somehow.  I tried that at first, and ran into some weird Domino Java errors.

    Here's some code that turns the JSON into a JSONObject.  It then prints the various object members to the Domino server console.
    JSONObject obj = new JSONObject(decodedString);
    Boolean devMode = false;
    if (obj.has("devmode")) {
       devMode = obj.getBoolean("devmode");
       System.out.println("devMode (variable) = " + devMode);
    }

    if(devMode) {
       System.out.println("targetdbpath=" + obj.getString("targetdbpath"));
       System.out.println("targetview=" + obj.getString("targetview"));
       System.out.println("targetdockey=" + obj.getString("targetdockey"));
       System.out.println("targetfieldname=" + obj.getString("targetfieldname"));
       System.out.println("updatedbyfieldname=" + obj.getString("updatedbyfieldname"));
       System.out.println("effectiveUserName=" + agentContext.getEffectiveUserName());
    }


    Now that I have the data, and know where it has to go, it's pretty much standard Notes agent stuff to paste the data there.

    array.prototype.pureSplice npm package

    Michael Brown  June 25 2016 11:22:42 PM
    I've just released my seventh npm package, array.prototype.pureSplice().  FYI, my seven packages now have over 2,000 downloads per month, combined.  Okay, that may not be in the same league as ReactJS (over 160,000 downloads per month) or AngularJS (over half a million downloads per month), but hey, it's a start!!!

    So, pureSplice() is an array method that returns a new array with a specified number of elements removed.  Unlike JavaScript's native array.splice() method, array.pureSplice() does not modify the source array.  That matters if data immutability is important to you and/or you are using libraries such as Redux.
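
    To illustrate the idea (this is just a sketch of the concept, not the package's actual source or API - see the npm page for that):

    // A pure splice: build the result from slices of the source array,
    // leaving the source array itself untouched.
    function pureSplice(sourceArray, start, deleteCount) {
      return sourceArray
        .slice(0, start)
        .concat(sourceArray.slice(start + deleteCount));
    }

    const original = ["a", "b", "c", "d"];
    const trimmed = pureSplice(original, 1, 2);

    console.log(trimmed);  // ["a", "d"]
    console.log(original); // ["a", "b", "c", "d"] - unchanged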

    Full instructions for use are on the array.prototype.pureSplice page on npmjs.com.  Also, a new feature on the npmjs site: you can now check how pureSplice() works in your browser, via Tonicdev.
