Adding Snap.svg to Vue.js Projects Created with vue-cli 3.0

My front end tool chain has finally settled into something I’m getting comfortable with. But these tools don’t stand still, so the learning continues.

One of the higher traffic posts on this blog has been my discussion of how I integrated Snap.svg into my Vue.js and Nuxt.js projects. I wrote that article because while it wasn’t fun, it helped me learn a lot about Webpack configuration.

That article was written at a point in time when I was maintaining standalone Webpack configuration files for my projects. A big part of the trouble with this approach is that it intrinsically couples a large number of external dependencies together. My Vue.js projects rely on a number of third party libraries, Webpack uses a collection of loaders, and the build tool chain itself has a number of dependencies like ESLint and Babel that bring along their own school of remora fish.

So, you can understand why once you’ve got a fairly complicated project and build process configured in Webpack, that the last thing you typically want to do is touch it. It’s even worse when you introduce dependencies like Snap.svg that, because of their lack of module support, require mystical Webpack incantations to get them to work.

I just had to extract this slope intercept calculator from a project that was generated without the CLI, so I was using the older Webpack mysticism to get Snap.svg working. With the need to create this in a new project, starting with the latest Vue.js and the vue-cli was the natural approach… But that meant getting our friend Snap.svg working again. Read on to find out how…

How the vue-cli Reduces Webpack Configuration Hell

Let’s be honest, not many people like working on Webpack configurations.

One of the things the vue-cli sets out to do is to hide away some of the complexity associated with Webpack configurations. This makes building compiled Vue.js projects a bit less intimidating, but it also allows the whole tool chain to be upgraded without breaking what would have been a hand-coded Webpack configuration.

There’s still a Webpack configuration hiding under the hood of course, it’s just that the CLI manages it. And if you upgrade Vue.js and its tool chain, in theory it becomes a seamless process because the configuration is “owned” by the CLI and it’s not a visible part of your project that you’ve been tempted to monkey around with internally.

This management of the Webpack configuration by vue-cli is in contrast to many other tools that “eject” a configuration when the project is first setup, or when you might need to make an unsupported modification. At that point, the Webpack configuration file is owned by the developer (lucky you!) and a future upgrade becomes something to potentially be feared.

Fortunately, vue-cli 3.0 introduced a new way to make modifications to the generated Webpack configuration without having to eject a configuration and be responsible for maintaining it forever forward.

Webpack-chain: How to Modify a vue-cli Webpack Configuration Without Ejecting

The magic comes from webpack-chain, a package that came out of the Neutrino.js project. It’s bundled into the vue-cli and the implementation details are slightly different, but the documentation at the link above is useful for digging into individual options. There are some examples specific to vue-cli at this link, although they fall short of the complete reference in the actual webpack-chain documentation.

Webpack-chain allows you to create little configuration patches that get applied programmatically to the Webpack configuration that is built by a vue-cli project each time you compile. The advantage of this is that you only need to specify the eccentric bits of your project’s build configuration, and if you subsequently upgrade to Webpack 7 or Vue.js 5 all of the stock configuration provided by the rest of the toolchain should hopefully get upgraded appropriately.

An example of this is getting CSS preprocessors like Sass or Less running. Normally this would take some Webpack gymnastics, but the vue-cli does this for you when you configure a project and you never have to touch the actual Webpack configuration. As vue-cli gets updated to use later versions of Webpack or whatever CSS preprocessor you’re using, these upgrades will come along for free in your project without having to go back and touch the Webpack configuration. Woohoo!
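For example, in a vue-cli 3 project, Sass support amounts to installing the loader packages and, optionally, a few lines of vue.config.js to pass options through. Here’s a sketch, not a definitive recipe: the “data” option name assumes the sass-loader version vue-cli 3 shipped with, and the variables file path is purely hypothetical…

```javascript
// vue.config.js -- sketch only. After `npm install -D sass-loader node-sass`,
// vue-cli wires sass-loader into its Webpack configuration for you; this file
// just passes options through to it. The _variables.scss path is hypothetical.
module.exports = {
  css: {
    loaderOptions: {
      sass: {
        // make shared variables available in every <style lang="scss"> block
        data: `@import "@/styles/_variables.scss";`
      }
    }
  }
};
```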

Where you do need to go outside the box, you can do it by setting up a “chain” in a separate configuration file. You may still have to update your custom configuration bits in your chain on undertaking a major version upgrade, but those changes are scoped and isolated, and they’re definitely more succinct. I much prefer this to picking through 1000 lines of Webpack boilerplate to find where I’ve done something outside the box.

The slightly inconvenient part is that webpack-chain’s syntax differs from the JSON object structure you may be more familiar with if you’re a Webpack old timer.
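To make the difference concrete, here’s the Snap.svg rule we’ll build later in this post, expressed both ways. The object literal is the shape you’d write by hand in a standalone webpack.config.js; the chained calls (shown as comments) build the same structure programmatically…

```javascript
// The classic object-literal style you'd maintain by hand in webpack.config.js:
const objectStyleRule = {
  test: /snap\.svg\.js$/,
  use: [{ loader: "imports-loader?this=>window,fix=>module.exports=0" }]
};

// The webpack-chain style builds the same rule with chained calls instead:
//
//   config.module
//     .rule("Snap")
//       .test(/snap\.svg\.js$/)
//       .use("imports-loader")
//         .loader("imports-loader?this=>window,fix=>module.exports=0");

console.log(objectStyleRule.use[0].loader);
```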

As an example, let’s look at our friend Snap.svg.

Getting Snap.svg Working with vue-cli

As in manual configurations, we’ll use the imports-loader to load Snap.svg in our project. So once we’ve created a new project with vue-cli, we’ll need to install Snap.svg and imports-loader as appropriate dependencies…

npm install --save snapsvg
npm install --save-dev imports-loader

We don’t need to explicitly install webpack-chain as part of our project as vue-cli already bundles this into itself.

To create a chain that gets applied to the vue-cli generated Webpack configuration, you need to create a file named vue.config.js in your project’s root folder. Here’s the entire file I’m using to pull Snap.svg into my project…

module.exports = {
  chainWebpack: config => {
    // Run Snap.svg's distribution file through imports-loader so it sees
    // the browser's window object when it initializes.
    config.module
      .rule("Snap")
        .test(require.resolve("snapsvg/dist/snap.svg.js"))
        .use("imports-loader")
          .loader("imports-loader?this=>window,fix=>module.exports=0");

    // Let `import Snap from "snapsvg"` resolve to the dist build above.
    config.resolve.alias.set("snapsvg", "snapsvg/dist/snap.svg.js");
  }
};

That will handle getting Snap.svg resolved and imported when we need it. Again, the webpack-chain documentation illuminates how this maps back to traditional JSON Webpack configuration structures, but you can readily see how you might set up other behaviors like plugins or other loaders.
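Tweaking an existing plugin follows the same chained pattern. As a sketch (the SNAP_DEBUG flag here is made up purely for illustration), this taps the arguments vue-cli passes to Webpack’s DefinePlugin, which it registers under the name “define”…

```javascript
// vue.config.js -- hypothetical sketch. "define" is the name vue-cli uses
// for Webpack's DefinePlugin, and tap() lets you rewrite the arguments the
// plugin was constructed with before the build runs.
module.exports = {
  chainWebpack: config => {
    config.plugin("define").tap(args => {
      // add a made-up compile-time flag alongside vue-cli's own defines
      args[0]["process.env"].SNAP_DEBUG = JSON.stringify(true);
      return args;
    });
  }
};
```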

All that’s left is to reference it in the source of one of our components, something in very broad strokes like this…

<template>
  <div>
    <svg id="mySVG" />
  </div>
</template>
<script>

import Snap from "snapsvg";  // <-- triggers our import resolution

export default {

  data() {
    return { mySVG: undefined };
  },

  mounted() { 
    this.mySVG = window.Snap("#mySVG");
  },

  methods: { 
     // draw amazing stuff here with this.mySVG
  }
}

</script>

Troubleshooting webpack-chain

The vue-cli includes commands to dump a copy of the Webpack configuration with your webpack-chain modifications applied to it.

vue inspect > ejected.js

You can open this file to look and see how the chain was applied to generate corresponding Webpack rules. In our case, this is what we got for the vue.config.js file described above…

  resolve: {
    alias: {
      '@': '/Volumes/Hactar/Users/jscheller/vue/myproject/src',
      vue$: 'vue/dist/vue.runtime.esm.js',
      snapsvg: 'snapsvg/dist/snap.svg.js'
    },
   
    [ ... and about 1000 lines later... ]
 
      /* config.module.rule('Snap') */
      {
        test: '/Volumes/Hactar/Users/jscheller/vue/myproject/node_modules/snapsvg/dist/snap.svg.js',
        use: [
          /* config.module.rule('Snap').use('imports-loader') */
          {
            loader: 'imports-loader?this=>window,fix=>module.exports=0'
          }
        ]
      }

A great thing about running the ‘vue inspect’ command is that it will give you console output if your chain rules have a problem. If you build up a webpack-chain in your vue.config.js file that doesn’t seem to be working right, this is probably one of the first things you should check to see if it’s actually getting applied correctly.

Save Money by Transferring Domain Registrations from Bluehost to Cloudflare

If you’re a web developer you’ve probably had way too many ideas for way too many amazing projects, and you’ve promptly gone out and registered domain names for each one of them. That feels like progress, right? But before long, you’ve got a domain registrar account with 100 random domains that are auto-renewing for $16 every year.

Obviously the best way to save money is to let some of those domains expire, and I’ve gotten a bit more realistic about the ones I’ve held onto. I still have quite a collection of family names and actual honest-to-goodness projects that made it beyond the “what-a-great-idea-go-register-it!” stage. So my annual registrar bill does look scary when I add it all up around income tax time.

Cloudflare came down the chimney with an early Christmas present, however. And it’s a big one.

If you’re not familiar with Cloudflare, they create a layer between your actual web hosting and the visitors that show up to view your content. That layer acts as both a CDN and a security service, so Cloudflare accelerates your website’s performance and protects it from (among other things) DDoS attacks. They do this by taking over the DNS for your domain, directing requests through their servers first to cache and deliver your content. You still need to host your site somewhere, and you still need to register your domain somewhere, but by putting Cloudflare between visitors and your site, it’s like having a bouncer at the door that helps move everybody in and out of the business faster and keeps the riff-raff outside in the snow.

The basic level of Cloudflare service is both amazing and free. Not free as in “free trial” or free as in “you’ll need to upgrade to do anything useful” but free as in “how are they doing this for nothing?!” free. I have been using Cloudflare for years on DadsWorksheets.com which serves a significant amount of traffic, and not only has the security functionality been a win but Cloudflare has definitely saved me from needing to host on a larger server (sorry, Linode!)

Anyway, back to Cloudflare’s Christmas delivery. Cloudflare announced that they were rolling out domain registrar services, and even better that they would be providing close to wholesale rates to Cloudflare customers. What this means for us compulsive domain name collectors is that we can get rates closer to $8 instead of paying $16+ for our precious domain gems each year. Plus, since I’m using Cloudflare for most of my live sites anyway, it’s one less place to deal with accounts and record changes and those bizarre who-is-serving-what questions that come up when you’ve got 37 domains online.

You can read more about Cloudflare’s registrar announcement here…

https://blog.cloudflare.com/using-cloudflare-registrar/

I’ve been registering domains at Bluehost, which I know isn’t ideal from either a management point of view or pricing. I started there back when their registration rates were a lot more reasonable, but it’s gradually become a profit center with add-ons like “registration privacy” and other services. If only some of that money had gone into updating their web UI for managing domains, the last ten years of escalating registration fees might have felt less like a mugging.

Navigating the interface to get your domains moved from Bluehost (or other similar commodity hosts like GoDaddy or 1&1) to Cloudflare can be a little unruly, so this post will quickly walk you through the steps. I’ve moved roughly a dozen domain registrations to Cloudflare now and it’s fairly painless, but you do have to go back and forth between the sites a bit to actually complete all the steps, and even Cloudflare’s service (which is still a bit new) can be less than clear in a few spots. Hopefully this post will save you some trouble if you’re making the same changes.

Note that even if you’re not hosting your actual website at the site where you register your domains, you can still move your registrar and DNS services to Cloudflare. Because your domain registrar and DNS and the actual web site hosting are different services, it’s relatively transparent when you move things around.

That said, if you are doing some non-standard DNS things, or you frankly have no idea what DNS does or how it works, this might be an opportunity to learn or ask that techie buddy of yours to glance over your site before you jump in.

But if you’ve got your Batman cowl on, the basic steps we’ll go over look like this…

Steps to Migrate Domain Registration from Bluehost to Cloudflare

  1. Create a Cloudflare account if you don’t have one.
  2. Activate your site on Cloudflare to start serving DNS records from Cloudflare instead of your current host.
  3. Enter Cloudflare’s DNS server names at your current registrar to point your domain to Cloudflare.
  4. Make sure your site is working.
  5. Start the registrar transfer process, including entering the domain’s transfer authorization code.
  6. Go back to your registrar and confirm that the transfer to Cloudflare is okay. This is the step that’s easy to overlook.
  7. Go back to Cloudflare and verify it’s now the registrar.

Let’s go over these now in a bit more detail.

Step 1: Create a Cloudflare Account

If you don’t yet have a Cloudflare account, go to their home page and click the “Sign Up” button and go through the usual email verification steps.

Step 2: Activate Your Site

Once you log into your Cloudflare account, you need to add your site to Cloudflare by clicking the “Add site” link near the upper right part of the page.

Add a Site to Cloudflare

This turns on all the Cloudflare wonderfulness for your domain, but doesn’t yet get you to the actual registrar part of the process (we’ll cover that in a second).

Again, the free plan is probably all you need. You can upgrade to plans that offer a few more performance enhancements and security, but if you’re coming from bare-metal hosting somewhere else, the free plan is already a huge infrastructure upgrade.

As you go through the steps to activate your site, Cloudflare will scan your current DNS records and make a copy of them to serve in place of your current DNS provider. It will get everything ready to redirect your site through Cloudflare’s servers so that the CDN and security features will be active once you change the DNS server names in the next step, but nothing up to this point has changed the way your site is currently being served.

Step 3: Update Your DNS Servers

Your site is still being served exactly the same way it was before because it’s still going through whatever DNS services are in place. To get Cloudflare to recognize that your site is “active” you need to change the DNS name servers. These are entries at your current registrar that tell the whole of the internet how to resolve names to IP addresses for your domain, and by changing these server names Cloudflare is able to tell that you actually own the registration for the site.

At the end of the site setup process in Cloudflare, you’ll get a page that shows you the new DNS server names. You can also find these at any time by looking at the “DNS” tab when you’re managing any particular site in Cloudflare’s dashboard.

The place you enter these new DNS server names is on the “Name Servers” tab inside Bluehost. It’d be useful to keep track of the existing values you have there just in case you want to roll things back.

The page at the end of the Cloudflare setup process looks like the window on the left in this screen shot, and you’ll want to copy those server names over to Bluehost (or your current registrar) in the indicated places in the window on the right (Click to expand the image if you need to see more detail)…

Cloudflare DNS Change

When you click the green button to save the changes in Bluehost, it’ll take some time for the name server records to get distributed to all the servers that cache domain DNS information around the internet, but within a few minutes you should be able to log back into Cloudflare and the domain should show “Active” on the main dashboard page when you log in…

Cloudflare Active Domain

Step 4: Verify Your Site Is Working

Verify your site is working as expected, including any other services like email or APIs. Again, Cloudflare should have copied your existing DNS records and updated them appropriately, but this is your chance to catch anything that didn’t come across correctly.

If something seems out of whack, put the old DNS name servers back in, wait for propagation and then dig deeper to see what’s going on. Putting the DNS server names back as they were will restore everything exactly to as it was before (Cloudflare is completely bypassed) so this can function as a clear on/off switch if anything is misbehaving.

But if all goes well, at this point, Cloudflare is providing its CDN and security services for your site and we’re ready to move to updating the domain registrar.

Step 5: Start the Registrar Transfer

When you log into your Cloudflare dashboard, you’ll see a list of your sites and (as of December 2018) a purple box that invites you to transfer domains. On my account it looks like this…

Cloudflare Domain Registrar

…but you may see a similar message inviting you to “claim your place in line” for an invitation. Cloudflare is still rolling this service out in scheduled waves so they don’t get overwhelmed, but as of this writing you should only be waiting a week or two to get through the queue, and hopefully it will be wide open shortly. If you are in the queue, you’ll need to log back in to Cloudflare periodically to see if the registrar functionality has opened up to you or not… They don’t seem to send an email or any kind of notification once you’ve reached the front of the queue.

Assuming you can register the domains, Cloudflare will let you select which active domains you want to transfer from Bluehost to Cloudflare. It will default to a list of all active Cloudflare domains.

Cloudflare will need a credit card and billing details during this process. It will charge you for one year’s domain registration as part of the transfer process. This is typical when transferring domains between registrars.

But before we get too far… That Bluehost registrar user experience that I’m sure you love as much as I do is going to need some attention from you. If you log back into your Bluehost account and go to the ‘Domains’ area of the site, you need to select your domain and make sure it is “Unlocked”. This is the page you’re looking at…

Bluehost Locked Domain

Note that “Lock Status” entry in the middle. If it shows “Locked” it will prevent a transfer from happening, and you should click “Edit” there to go to the lock panel and then choose to unlock the domain. I’ve had mixed results with the correct lock status showing there right away, and if you have trouble it may be worth logging entirely out of the Bluehost interface and logging back in a minute or two later to verify the change has taken effect. It’s not clear to me whether attempting to unlock the domain repeatedly when the status is showing “Locked” is toggling the state or consistently unlocking it, but it’s definitely a little flaky for me.

The next important thing you need out of Bluehost is the domain’s transfer EPP code. This is a sort of password that registrars use to make sure that a transfer request has been authorized by the actual owner of the site. It’s a random string the registrar generates, not something you’ve provided, and in Bluehost’s domain UI you’ll find it here…

Bluehost EPP code

When you go through Cloudflare’s transfer process, it will ask you for this code…

Cloudflare EPP authorization code for domain registrar

Note that I’ve shown the actual auth code in my screen shots here, but you should NEVER share this code publicly from your live registrar. Since I’ve already completed the domain transfer from Bluehost to Cloudflare, the EPP code here is essentially dead, but if you had a live site, that code would potentially let someone request a transfer of your domain. Whoops.

You’d think, of course, that you’d be done at this point, but going back to your Cloudflare dashboard and clicking the “Domain Registration” link you’ll see something like this…

Cloudflare Domain Invalid Auth Code

…and that leads us to an easy to overlook step.

Step 6: Confirm the Registrar Transfer

The error message in Cloudflare suggests that you didn’t copy the right EPP authorization code in, but in reality it’s simply complaining that the transfer was rejected by Bluehost.

The auth code is probably fine (you just copied and pasted it, after all), and this message in Cloudflare’s system should probably say something more like “Transfer rejected by original registrar. Please verify authorization code or confirm the transfer at the old registrar’s interface.”

Because that’s what you need to do here, as you’ll discover when you find yourself wandering back into Bluehost’s interface to make sure you copied the right thing out…

Bluehost confirm epp domain transfer

If you go back to the Bluehost transfer EPP tab, you’ll see something a bit different now. In addition to showing you the EPP code again, Bluehost is making darn certain that you want to stop paying those over-priced registrar fees, so it’s blocking the transfer to Cloudflare until you click that blue link. So click away.

Step 7: Verify Cloudflare is Your Registrar

A moment or two later, Cloudflare should recognize that it’s the official registrar for your domain. You can verify this by clicking the “Domain Registration” link at the top of your Cloudflare dashboard, or if you click the domain itself from the dashboard home page, there’s a section in the right rail that shows your registration details…

Cloudflare Dashboard Domain Registration


Thoughts and Insights

I’m not affiliated with Cloudflare, but I’m a huge fan of their service. Their new registrar is going to be game changing, and it’s incredible how much money I’ve spent over the years as traditional registrars have raised fees.

Cloudflare’s registrar service is still new and has a few hiccups. I don’t see an obvious way to register an entirely new domain name yet, so it seems you can only transfer existing registrations. Maybe that’s a ploy to encourage people to take advantage of those introductory loss-leader registrations many other services offer. And it looks like right now transfers out of Cloudflare require a visit with customer support, so it’s probably not a great place for active domain traders.

But if this service is like everything else they do, Cloudflare’s registrar is going to get a lot better a lot sooner, and I’m pleased to be coming to one spot now for my CDN, DNS, security and domain registration. And I’m frankly looking forward to sending these guys more money once my sites get large enough to justify some of their higher performance add-ons.

Meanwhile, thanks Cloudflare, for making another corner of the internet a little nicer.


How DNS Mistakes Can Score You a Google Manual Penalty

When your livelihood is tied to website traffic, one of the worst things you can wake up to is an email from Google Search Console.

I’m no stranger to bad news from the Big G and the non-communication and horror movie circular forward-you-to-the-right-group conversations that go along with any dialog you might think would rectify the problem. I spent six months trying to get my site off the AdExchange blacklist because of a minor AdWords violation on an account I didn’t even know was still serving $10 per month worth of ads. Which sounds insane, I know, because my ancient AdWords account should have nothing at all to do with my display partner’s Ad Exchange account, but believe me… What a mess. In that case, I was fortunate that a friend-of-a-friend-of-a-friend had a direct Google rep with a back-door into some super-secret re-evaluation queue not visible to mere mortals.

But my real talent seems to be using DNS records to try to shoot myself in the hard drive. A couple of years back I managed to take my math worksheets site offline with a DNS record change that I thought was working fine because of a forgotten localhost entry that resolved to the right address. When I saw a huge traffic drop in Google Analytics the next day, I immediately knew I’d messed up, but that brief span of time offline wiped out almost six months’ worth of SEO ranking progress. I’m a huge fan of Uptime Robot now.

Not How You Want to Start Your SEO Day

Subdomain DNS Records are Dangerous

So you can bet I’ve become pretty dang careful with DNS records pointed at my primary site. But I’m also a developer.

There’s that old adage, “Real Developers Test It In Production,” something you should not subscribe to, so naturally I sandbox development and staging servers on subdomains. And of course, a subdomain means a DNS A/AAAA record that needs your full attention. And that, friends, is the beginning of the reason why I got another Google Search Console email a couple of days ago and why I’m doing something different with my dev servers going forward.

The obvious scary thing in the email’s subject line was the words “Hacked Content” and then, if that wasn’t enough to make every hair stand on end, the body text shouted “manual penalty” with a handy link right to the page on Google Search Console, which provided a big fat confirmation of every bit of bad news. Great.

After I calmed down a bit, I settled in to see what was going on. Google helpfully provided links to some of the pages that it claimed were hacked, and none of the URLs looked right at all. None were coming from the main www subdomain, which immediately lowered my heart rate, but even the URLs on the development subdomain they all referenced looked really weird.

And then it dawned on me: that development subdomain wasn’t even around anymore. I had decommissioned the server it was running on months ago, so that content couldn’t even be coming from a machine I was using. That server was gone, but its IP address was still resolving. And when I’d surrendered that IP address back to Linode, it meant that basically anybody else could start using a new server with that IP for their own purposes. So when someone else spun up a new site, it became reachable via a subdomain I still had defined. DNS-induced brain damage, part two, it seemed.

So in this case, there wasn’t any “hacked content” anywhere, it was just that my DNS made it look as though I was serving duplicate content from some random site under a subdomain I controlled. And while the manual penalty suggested it was only relevant to URL patterns that matched that subdomain, it was also pretty specific that the penalty affected the overall reputation and authority of everything under the domain, so fixing it right away was a priority.

The obvious and easy solution was just to delete the DNS record pointing to that subdomain, wait for propagation and then file a reconsideration request through Search Console. Even though the reconsideration request indicated that Google took “several weeks” to review anything, I did thankfully get a follow up email in roughly 36 hours that said the penalty had been removed.

I’m not sure if I took a short term traffic hit or not on the main domain, as all of these events transpired over a weekend, and the traffic pattern for this site drops off significantly then and around the holidays in general. Otherwise the site traffic looks normal in spite of the brief stint with the manual penalty in place. So far, it looks like I dodged whatever bullet might have been headed my way. I think an important contributor to the rapid turnaround and preserved rankings was that I fixed the issue quickly, explained in clear detail in the reconsideration request what happened and how I resolved it, and made clear that it wasn’t actually a malicious hack.

But the key takeaway is not just to be super careful managing your DNS entries, but also to run any publicly visible development and test boxes under a domain that has nothing to do with your main property.

If this had been an actual hack of a machine we were using for something critical, and maybe one that appeared more malicious than serving duplicate content, that manual penalty could have had a real negative financial consequence for the main site. It’s hard enough to secure a production server, but a development machine that is transient in nature is probably going to be less secure, and potentially a softer attack vector.

SEO is hard work, and shooting yourself in the DNS is pretty easy. If a hack, or even just a DNS misconfiguration, of a dev machine can lead to a manual penalty that affects not just the subdomain, but your entire web property, it’s much wiser to have it far away from your main domain. In the future, I’ll be running any publicly visible dev machines under an entirely different domain name for this reason.

Eclectic Observations from Arriving Late to the LinkedIn Party

After having been bitten quite hard by Google’s August algorithm update, I’ve been on a mission to establish a bit more EAT related to my online presence in hopes of a recovery. If you’re wondering what the heck I’m talking about, EAT is a recent bit of buzz phraseology that has those of us with an interest in SEO pounding our forks at the dinner (i.e., revenue) table and hollering about the latest batch of secret ingredients in Google’s ranking algorithm sauce.

EAT in SEO parlance stands for “Expertise, Authority and Trust” and from all impressions, this seems to be a subjective measurement of a site’s credibility assigned by a human quality rater at the Big G. And fundamentally this comes down to identifying people associated with sites, and establishing that those sites are built and run by bona fide credentialed humans and not Russian robots or other nefarious automatons. Google has a document that gives some vague hand-wavy instructions for its human raters to follow to find out more about a site’s pedigree, typically by looking off-site for items on the EAT menu.

I’ve been studying this menu for a while now, but one item off the appetizer list that I completely missed was setting up a personal profile on LinkedIn and getting a company page listed for DadsWorksheets.

So let me be candid here. I’m a terrific introvert. Where lately people run around denouncing the looming perils of social media addiction, I’m one of those dungeon dwellers whose arm need be twisted nigh off before I’ll log into my Facebook page. And, yes, if you’re one of the hundred-odd people who’ve sent me a LinkedIn invitation in the last few years, I hope you don’t feel scorned that I didn’t join you and I’ll ask your forgiveness now… It’s just that I never actually set up an account until today.

But after arriving catastrophically late to the party and reaching out to a dozen connections who might take some pity on my apparently self-induced social media ostracism, I did have a few observations coming in the door:

  • Wow, most of you old friends look quite professional in your profile pictures. I find myself wondering if I should shed my sunglasses, or if there’s some value in maintaining profile picture continuity across StackOverflow, GitHub, Discord and all the other tech-oriented services I actually do lurk through regularly.
  • Indeed, your profile pictures match some enviable résumés and work history. And interestingly, some glaring omissions. I’m looking at you, dear Veebo alumni, and wondering about airing those battle scars publicly as well.
  • Even more nostalgia inducing than the prospect of updating my own dusty CV is seeing where so many of you have travelled since we parted company. Being in this solopreneur consulting thing for so long, it’s easy to forget how many interesting places and great people you’ve worked with. It’s good to see you all again.

So maybe this social media thing isn’t all the cat videos and political noise it’s seemed to be, and I just needed to find the right place. We’ll see. For now, the hour or two on LinkedIn today was actually kind of fun.

Adding Snap.svg to Vue.js and Nuxt.js Projects

This post may be out of date for what you need… There’s an updated article that deals with adding Snap.svg to projects created with the vue-cli 3.0 and more recent 2.x versions of Vue.js. Click here to read it!

SVG is amazing, and if you’re building any custom vector graphics from your client code, one of the easiest libraries to use is Snap.svg. I’ve used it in a number of projects, including vanilla JavaScript and various transpiling setups including Transcrypt.

I’m trying to go a little more mainstream after wasting years on fringe technologies that fell out of favor.

I’m spending a lot of time these days learning Vue.js and really hoping this is going to be a worthwhile long term investment in my skillset. So it was only a matter of time before I found myself needing to get Snap.svg working in my Vue.js projects, which meant some extra fiddling with WebPack.

Getting Snap.svg Working with Vue.js

Out of the gate, there are some hurdles because Snap mounts itself on the browser’s window object, so if you’re trying to load Snap through Webpack (as opposed to just including it in a project using a conventional script tag), you need to do some gymnastics to get Webpack’s JavaScript loader to feed the window object into Snap’s initialization logic. You can find an overview of the problem in this GitHub issue, which illustrates the obstacles in the context of using React, but the issues as they relate to Vue.js are the same.

I’m assuming you have a Vue.js webpack project that you started with vue-cli or from a template that has everything basically running okay, so you’ve already got Node and webpack and all your other infrastructure in place.

For starters, you’ll want to install Snap.svg and add it to your project dependencies, so from a terminal window open and sitting in the directory where your project’s package.json/package-lock.json sit…

npm install --save snapsvg

That will download and install a copy of the Snap.svg source into your node_modules directory, and you’ll have it available for Webpack to grab.

Normally you’d be able to use a package installed like this via an import statement somewhere, and you’d think you could do that in your Vue project’s main.js file. But if you start down this path, you’ll get the window undefined issue described in that GitHub link above.

The tricky bit, though, is getting Webpack to load Snap properly, and to do that we’ll need a Webpack plugin that lets us load it as a JavaScript dependency and pass some bindings to it. So, in that same directory, install the Webpack imports-loader plugin…

npm install --save-dev imports-loader

To tell the imports-loader when it needs to do its magic, we have to add it to the Webpack configuration. I changed my webpack.base.conf.js file to include the following inside the array of rules inside the module object…

  module: {
    rules: [
      ...
      {
        test: require.resolve('snapsvg'),
        use: 'imports-loader?this=>window,fix=>module.exports=0'
      },
      ...
    ]
  },

Now we can load Snap.svg in our JavaScript, but imports-loader uses the Node require syntax to load the file. So in our main.js, we can attach Snap.svg by telling Webpack to invoke the imports-loader like this…

const snap = require(`imports-loader?this=>window,fix=>module.exports=0!snapsvg/dist/snap.svg.js`);

…and then attach it to our root Vue instance, still in main.js, something like this…

const vueInstance = new Vue( {
 el: '#app',
 snap,
 router,
 axios,
 store,
 template: '<App/>',
 components: { App }
} );

export { vueInstance };

There is some redundancy between that require() call and the way we set up the module resolution in the Webpack configuration. I’m fuzzy about why I seemed to need this in both spots, but it works, so I’m running with it. If you have insights, they’d be appreciated; let me know in the comments.
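To actually draw something with the attached instance, a component can reach it through the root instance’s options. Here’s a minimal sketch; the component name, element id, and colors are my own inventions, and it assumes the root Vue instance was created with `snap` in its options as above, making it reachable via this.$root.$options.snap from any child component.

```javascript
// A minimal component options object that draws with Snap.svg.
const MyCircle = {
  template: '<svg id="drawing" width="200" height="200"></svg>',
  mounted () {
    // Grab the Snap function that main.js attached to the root instance.
    const Snap = this.$root.$options.snap;
    // Wrap the rendered <svg> element and draw a filled circle on it.
    const paper = Snap('#drawing');
    paper.circle(100, 100, 50).attr({ fill: '#42b983' });
  }
};
```

Registering MyCircle like any other component should then give you a live Snap canvas once it mounts.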

Getting Snap.svg Working with nuxt.js

Nuxt requires a slightly different twist because, as you’re aware, a typical Nuxt project doesn’t have either a main.js file or a native copy of the Webpack configuration. We need to make the same changes, just in slightly different spots.

You need to install both snapsvg and imports-loader just like we did above…

npm install --save snapsvg
npm install --save-dev imports-loader

The way we modify the Webpack configuration in a Nuxt project is to create a function that accepts and extends the Webpack configuration from within your nuxt.config.js file…

/*
 ** Build configuration
 */

 build: {
   extend(config, ctx) {
     config.module.rules.push( {
       test: require.resolve('snapsvg'),
       use: 'imports-loader?this=>window,fix=>module.exports=0'
     } );
   }
 }

Since we don’t have a main.js, we need to use a Vue.js plugin to inject shared objects and code into Vue. In your project’s plugins folder, create a file named snap.js that contains code to attach a snap object, created again using imports-loader…

export default ({ app }, inject) => {
  app.snap = require(`imports-loader?this=>window,fix=>module.exports=0!snapsvg/dist/snap.svg.js`);
}
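As an aside, the second argument that plugin receives (inject) goes unused above. Nuxt’s inject helper can optionally expose the same object as this.$snap inside components and as app.$snap in context. This is a sketch of an alternative plugins/snap.js, not the configuration from my project, and it keeps the direct app.snap attachment as well:

```javascript
// plugins/snap.js — variant that also uses Nuxt's inject helper.
export default ({ app }, inject) => {
  const snap = require(`imports-loader?this=>window,fix=>module.exports=0!snapsvg/dist/snap.svg.js`);
  app.snap = snap;      // keep the direct attachment used above
  inject('snap', snap); // additionally exposes this.$snap in components
};
```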

…and back in your nuxt.config.js file, include this plugin…

plugins: [
   ...
   {src: '~/plugins/snap'},
   ...
],

These approaches seem to work well for me in both standard Vue.js and Nuxt.js projects, but both of these setups have been cobbled together from reading a lot of other bits and pieces… If you’ve got a better approach or see a way to clean up what I’ve done, please let me know.

Meanwhile, good luck with your Snap and Vue projects!


Migrating from Dasher.tv to the AnyBoard App

For a little over a year, I’ve been running a fantastic NOC-style dashboard on the AppleTV in my office courtesy of a nifty app called Dasher. It took a little Python gymnastics, but I was able to pull data from Google Analytics, Ahrefs and Staq to assemble a consolidated view of what’s happening at DadsWorksheets.com, all of which helps keep my eye on getting things done there.

Much of the work in this is a Python script that runs locally to collect the data. I’ve been pushing that data up to Dasher’s servers, which then gets fed back to the Dasher app on the AppleTV. But I’ve been concerned for quite some time because this app never got the love or attention it deserved, I’m sure in large part because it required chattering through a web API to push the data. So as well as this worked, it was never really going to be broadly adopted by anyone but us propeller heads.

That means I knew the hammer was going to fall on this little gem at some point, and sure enough I got the email yesterday that Dasher was shutting down in May.

I rely on this dashboard enough that Dasher’s demise caused me to peel off for part of yesterday to find a replacement. I didn’t want to spend the next few weeks with the thought gnawing at the back of my head, so I at least needed a plan.

And there are several good alternative dashboard apps out there, many of which have integrations to Analytics plus 100 other services I didn’t need. These are all great solutions, but they also all came with $10 per month fees, and they were missing integrations to oddball places like Staq and some of the other custom bits that I’d still have to jump through hoops to get fed anyway.

I’m already collecting all the data and generating a few simple charts in Pillow, so sending it somewhere that would ultimately show up on the AppleTV shouldn’t be hard. If there was even a simple version of Safari or another browser I could load on the AppleTV to bring up and auto-refresh a web page, I’d have a solution by kicking out some HTML or even a full-blown Vue.js app, but short of renewing my Apple developer account, reinstalling Xcode and side-loading tvOSBrowser, there isn’t much on the map.

That’s why I’m so glad I found AnyBoard. This is a great little app that does everything and more that Dasher did, but without putting a third party server in the middle.

When you set up Anyboard, you point it at a JSON file that you’ve made visible somewhere. That JSON file describes how one or more dashboards are laid out, and also where to get the data. The data comes from other JSON files you identify using URLs in the configuration. By refreshing those JSON files with new data, the Anyboard app will have access to live data feeds from whatever sources you can cobble together. There’s also a pre-built setup for Nagios, but I didn’t play with it here.

Because all of the dashboard data moves between the Apple TV and the local network, you can configure Anyboard to hit URLs on a local server, so your dashboard configuration and your actual data can stay inside the building. You’re also not at the mercy of a third party developer pulling the plug on the API that feeds the app. So I’m anticipating a very long relationship with Anyboard here.

Not that I think there’s anything to worry about. I traded a few emails with Ladislav at sféra, the Anyboard developer, and he was eager to help work through some odd things I was doing in my configuration and answer a few questions I had. These are the kinds of guys worth our support.

I was able to port my Dasher configuration over to Anyboard in about half a day, and the resulting dashboards look better than they ever have. Anyboard is free, there’s no premium version (which I would have gladly bought) or subscription fees (which I would have ruled out). It’s a solid app that does an important job and does it well. I can see a few minor areas that I hope Ladislav polishes up in future builds, but if you’re comfortable cranking out a few simple JSON files, I can definitely recommend Anyboard as an AppleTV dashboard solution without hesitation.

You can find out more about Anyboard at https://anyboard.io/.

Does Mark Cuban Want You to Die In Poverty?

A friend of mine asked me for my thoughts on this article and video…

What Mark Cuban Says will be the #1 Job Skill in 10 Years

The TL;DR is “creative thinking,” therefore pursue a liberal arts degree in lieu of applied fields such as, pointedly, software engineering.

Which is probably suicide.

I’ve never quite understood the hero-worship over Mark Cuban. I get that he’s successful and made a lot of money in the tech bubble, but I think his key bit of acumen was getting diversified before the crash. After that, what? Basketball teams? Shark Tank? Okay. He probably doesn’t think much of me either, so whatever.

But, no, a liberal arts degree isn’t going to be any more valuable in 10 years than it is today. There’s nothing wrong with these skills on their own merits, but society and the economy are already telling us their value in an employment-related context. And that value is not positively correlated in any respect with what college tuition costs.

Yes, in the coming years, we’ll have more data being produced and more information being thrown at us, just as if we compared today to ten years ago. But if anything, those societal changes have made knowing how to understand, manipulate and generate data an even more valuable skill… The demand for software professionals (who are, simply, people who work with data) is vastly outstripping supply and will continue to do so for decades. The notion that having more data means we’ll need fewer data-literate professionals is, even on its surface, pure idiocy.

Meanwhile, the job opportunities for liberal arts majors seem often to come from service industry positions that have nothing to do with their degrees. These are exactly the places where automation is going to displace employment. And by “displace” I mean totally erase. We are on the verge of possibly the biggest shift in employment demand since the invention of the steam engine, and hundreds of millions of people are going to be underemployed due to technological innovation. If a graduate’s primary job skill is analyzing French literature, and they spent $100,000 and four years to get there, I’m going to go out on a limb and say they’re hosed.

There’s some sort of mythology around liberal arts degrees being more creative than applied fields. I don’t know where this thinking originated, but I’ll wager it didn’t come from anybody actually working on problems in any applied field. Problems in business and applied sciences not only require creative, critical thinking… They often have enormous consequences when creative solutions can’t be found on time and on budget.

Don’t believe me? Because, you know, Mark Cuban? Basketball? Maybe read these articles instead of listening to Mark…

Only 2% of employers are actively recruiting liberal arts degree holders. Compare that to the 27% that are recruiting engineering and computer information systems majors and 18% that are recruiting business majors.

It’s unclear whether liberal arts graduates are pursuing social service jobs because they’re more drawn to them, because they’re suited to a wider breadth of possible fields (which also contributes to a slow start salary-wise) or because that’s simply what’s left after all the other jobs are taken.

If you’re going to college, get a degree in building something. Business, “hard” science or engineering. These are problem solving degrees that require not just creative thinking, but creative problem solving. Those are the skills employers need.

Or, get a degree in, essentially, debt management. Because that’s probably the primary differentiable skill you’re going to acquire with an advanced liberal arts degree.

How Eclipse Killed GWT

I’ve had a love and hate relationship with Google Web Toolkit for a couple of years now. I built a modest-size project using GWT back in 2010, which was definitely during a time of some GWT 2.x related growing pains, suffered through the uncertainty of Google abandoning GWT in favor of Dart, then watched the whole project get handed off as open source for a long descent into obscurity.

I’m contemplating a larger project again, one with a significant web front-end. And GWT 3.0 is on the horizon, right? I went back to dear GWT to build a few small widgets, trying to see if maybe things were better now. Six widgets down this path, I’m throwing in the towel. And here’s why.

Eclipse.

I mean, don’t get me wrong. GWT seems to have the smell of death all over it on its own. As far as I can tell there still isn’t any kind of proper date data type, not even an emulation of the Java standard library classes. Google, the biological parent, has essentially kicked the kid to the curb. Looking at GWT questions on StackOverflow is like staring into a room full of starving, lonely people. If I had to get a question answered there, or even in the hoary old googlegroups thread, I’d probably need antidepressants.

But GWT still worked, did basically what was advertised, and it was familiar.

Except. Eclipse.

I’ve got a long relationship with Eclipse as well, going way back to when it was an IBM product called “VisualAge for Java” that (I think) was originally written in Smalltalk of all things. I ran a good-sized team building desktop front-end applications in Java in the early 2000s. That one application forever convinced me that nobody should be writing desktop applications of any complexity in any language running on a VM. Simple typing in Eclipse is an exercise in torment. What do you imagine you do, every second of every minute, all day as a developer in an IDE? Typing. If your IDE sucks at that, nothing else matters. I find myself wondering what the people actually working on Eclipse itself think about it, but I suspect this is one of those situations where you’re so close to the problem you don’t realize how bad it is (Hint: VERY bad). I’m sure somebody loves it. But I’m sure they’ve never actually worked with anything else. Including a manual typewriter, because even that would be less frustrating than writing code in Eclipse’s text editor.

And if Eclipse on its own wasn’t enough to make me want to torture small forest creatures, GWT on Eclipse was driving me truly insane. The build process was slow, buggy and complicated. For projects that didn’t need a backend, I could still never get a configuration to launch without Jetty. Occasionally I’d screw up something in my project settings that would prevent me from launching debug configurations, and I’d resort to creating an entirely new GWT project and bringing all the source into that. It was lunacy of the sort that made you beg for makefiles.

I explored other options… I could never get GWT working right in IntelliJ even if the text editor there was a little more sensible, and when it came time for widget #7, the despair was palpable and I realized I was finally, utterly, completely, done.

Widget #7 wound up coming to life with nothing more than a shell prompt, BBEdit, Transcrypt and the Developer Tools in Google Chrome. And I couldn’t be happier.

Goodbye old friend.