Two Space Or Not Two Space – Day 5

Day 5 of the #100DaysToOffload Series:

I’m old. Let’s just get that out of the way first thing. I can’t remember the kind of typewriter I learned to type on, but the fact that I learned on a typewriter is probably all the information you need. When I learned to type, my teacher was adamant about one particular fact. One space between words, two spaces after a period. Now, it appears Microsoft would beg to differ.

I never really knew what the reason was for the whole “two space rule” when it came to the ends of sentences, so I asked a friend of mine who works in a museum. This is what he told me:

The history of the double space goes back to the days of manual typesetters and how the kerning was set up with newsprint. The extraneous space was necessary so the eye/brain could readily recognize the end of a sentence. (As a researcher who has read WAY too many column inches of tightly-packed 19th century newsprint, I want to thank them for doing that. Insert slow clap)

This continued into the modern era in newsprint – again, as the kerning had not changed. Modern word processor software and fonts typically modify the kerning so you do not have to add the second space.

So, it appears that the whole reason I was adding two spaces to the end of each sentence was because manual typesetters a couple of centuries ago did it to make printed sentences easier to read. There’s literally no reason to continue this practice on a computer.

A few days ago, Microsoft updated Microsoft Word to mark it as an error if two spaces are put after a period.

Maybe it’s childish of me since I stopped putting two spaces after the period years ago, but the fact that Microsoft is telling me it’s wrong is the best reason I’ve seen in years to start doing it again.

The OS That Wasn’t – Day 4

Day 4 of the #100DaysToOffload series:

Today was the first day of the Red Hat Virtual Summit (#RHSummit), and one of the more interesting sessions I attended was on a product called ROS. ROS is the Robot Operating System, and it’s a collection of software frameworks for robot software development.

So I can hear the question you’re about to ask. How is a collection of software frameworks for robot software development an operating system? The short answer is, it’s not. That’s right, the Robot Operating System is not an operating system.

I know very little about ROS, and what I do know I learned about it today during the session. Still, I thought it was interesting considering yesterday’s entry into the #100DaysToOffload series, Killed By Google. Yesterday I talked a little about a tiny company called Eazel that created an open source product, went bankrupt, and had the product they created thrive. The company that created ROS has a similar story. They were called Willow Garage. They were founded in 2007, and faded into obscurity in 2014.

Despite Willow Garage no longer being around as the entity that created ROS, ROS is still very much in use in the robotics industry today. It can run on a variety of different operating systems (real ones this time), and powers devices from iRobot to NASA. If Willow Garage hadn’t made the choice to open source ROS, the robotics industry could be in a very different place today.

I’m looking forward to learning more about this operating system that’s not an operating system.

Killed By Google – Day 3

Day 3 of the #100DaysToOffload series:

It was recently in the news that the Google axe has once again fallen on one of its products. I can’t remember if it was Hangouts, or Google Cloud Print, or some other unfortunate product. The fact of the matter is, Google’s product body count should give pause to anybody considering using one of their products.

There’s a site called Killed By Google that maintains a list of products that Google has killed over the years. The numbers are staggering, and honestly I couldn’t muster up the energy to try to count them all. Let’s use the scientific term, “lots”.

To me, this illustrates exactly why Open Source is the way to go.

I’m always drawn back to the story of Eazel. It was founded by Andy Hertzfeld in the late 90s, and it lasted less than two years. It made a single product, and most Linux users today probably don’t remember the company or the product it made.

It’s still here though.

The product they made was a file manager called Nautilus. Nautilus was a nice product, and continues to be updated today by the open source community. Since the Eazel days, it’s been incorporated directly into Gnome, and has been renamed GNOME Files. If you’ve used GNOME since 2001, you’ve probably used Nautilus.

This tiny company of fewer than a hundred people produced a product that has lasted twenty years and continues going strong today, long after the company that made it has fallen from the memory of the industry. Compare that to the graveyard of products Google has left in its wake. Each time a product gets the axe, the people who use it are left to fend for themselves.

It seems obvious that Open Source is the better way to produce a product.

My First Time With Linux – Day 2

Day 2 of the #100DaysToOffload series:

I’ve told this story before, and odds are I’ll tell it again. I just haven’t told it here.

Back in the mid 90s, I was in college studying Computer Science. I was living in a double wide trailer house some high school friends owned, the three of us crammed into that small space. They were both studying Electrical Engineering. When we weren’t studying for school, we were trying to pay our way through by working at a local company that supported the campus computer systems. I specialized in desktop Windows repair, and they were the only two Macintosh technicians on campus.

I don’t remember the exact day, or what I was doing, but I was on my computer in my room when I was invited to see “Windows” running on one of their Macintosh computers. This was before emulation was a big thing, so having Windows on a Mac was a pretty novel idea. When I looked though, things weren’t quite right. It looked really close to Windows, but it wasn’t. I finally got them to cave on the details, and it turns out they were running something called MkLinux on the Mac.

I was absolutely fascinated by this thing they had found. When I found out that they downloaded it free of charge, I wanted to find out right away if there was a “PC version” available for me to use.

At the time, all the development work for the Computer Science department was being done on some Digital Unix systems. Our primary system was called Esus and the secondary system was called Fubar. This was long before people had high speed Internet in their homes, so it was a challenge to use our one phone line to dial up the campus modem bank so I could do my homework. Linux solved that problem, because everything I needed seemed compatible with the campus systems, so I could do my homework locally. It didn’t take me long to find the “PC version” called Red Hat, and I’ve been using Linux one way or another ever since.

100 Days to Offload – Day 1

Day 1 of the #100DaysToOffload series:

Many of the people reading this will come to my blog from Mastodon. I spend most of my “Social Media” time there, so that’s how most people will know me. For those that don’t, I want to give a brief explanation of what’s going on here with this #100DaysToOffload stuff.

A good friend of mine, Kev Quirk, recently suggested a “100 Days to Offload” challenge. This is a challenge to those individuals with a blog to post one thing a day for 100 days straight. I have a blog, kind of. I’ve written somewhere in the neighborhood of a post a year for the last couple of years. Coming up with something to write about for one hundred days straight would certainly be a challenge for me. So, I took it.

I’m not the only one doing this. Kev himself has decided that he’s going to take part, as have others. Several are hosting their blogs on the write.as platform, which makes it easy for anybody interested to follow along. Anybody that uses the tag #100DaysToOffload will show up on write.as’s reader site. You can add that to your favorite RSS reader, or just hit the web page when you’re curious.

Now, there’s no specific theme to this challenge. I can write about anything I want. I’m probably going to try to focus mostly on tech, specifically open source stuff, since that’s what I know. Just don’t be surprised if I throw in something completely off the cuff because I can’t think of anything “techy” to write about on a given day.

OK, I guess that’s enough of the Mikespaining. Let the games begin!

Why I Have A Blog

My response to Kev Quirk’s blog post, asking people to explain to the world why they have a blog.

Over the years, my blog has been various things. First, it was a place to rant. Then, it became a news aggregator where every day I would post a series of interesting links I’d run into over the course of the day. Then, it morphed into a “what’s on your mind” stream of thought writing place. It’s still kind of that, when I find time to write.

Since I started Fosstodon with Kev, I haven’t really needed to use my blog. I posted a lot to Twitter before Mastodon existed, so I needed somewhere to wax philosophic on anything that needed more than 140 characters. Twitter moved to 280 characters, and then I moved to Mastodon where we have a limit of 500 characters. I find I post to Fosstodon more than anything else. I keep my blog around in case something comes into my mind that I can’t express in a post or two on Fosstodon.

I keep wanting to change the reason I have a blog to something more meaningful, and some have encouraged me in that direction. I’ve had some difficulty motivating myself in that direction. Hopefully in the near future that will change. I wouldn’t recommend holding your breath though.

Migrating from WordPress to write.as

It’s been literal years since I’ve faithfully updated my blog. The old content has been sitting idly on WordPress.com in a free account since I canceled the server I had GoDaddy hosting for me. I planned to self host, but having a pseudo-large family and a job that doesn’t understand boundaries made the idea of hosting infrastructure in my home less and less appealing. So, things basically stagnated.

Lately I’ve been feeling the urge to write again, so I started looking at my options. I still didn’t want to host in my home. I considered WordPress since my posts were already there, but it just seemed like more than I needed or wanted. Most of the posts I’d put up before were basic syntactically. To make a long story short, I settled on write.as.

write.as was an obvious choice for me because of its integration with the Fediverse. My friend Kev and I founded Fosstodon two and a half years ago, and it’s grown to almost 8500 users as of this writing. I interact with some really great people there on a day to day basis, and having a blog connected directly to the Fediverse seemed like a really great option.

The next big challenge was I didn’t want to lose my old content. Not all of it at least. Previously I’d used my blog for some pretty frivolous things, and I didn’t want to pull all of that over, but there were several things I did want to keep. How do I get content from WordPress to write.as?

Kev came through for me on this one, and wrote an article on his blog detailing how to export the content from WordPress to markdown, which is what write.as uses for its formatting. The WordPress posts are exported to individual directories containing a file called index.md and a directory with any images that were embedded into the post. The vast majority of the images I’d used were decorative, and not really relevant to the article, so for the most part I didn’t need them.

So, how to get the index.md files into write.as while maintaining the original dates? Turns out that last part was a little bit of a sticking point. There were CLI tools for write.as, but they didn’t let you change the date a post was made. I couldn’t find a way to do it through the web interface either. The only way I could find was through the API.

Now, I went to college for Computer Science, but I haven’t done serious development for a very long while. When I do write something, it’s usually very basic to automate a process at work or simplify something at home. I’ve been learning Python lately, and this seemed like a good opportunity to give myself a homework assignment.

File format

Using the write.as API and Python, I wrote myself a little script that can post the files retrieved using Kev’s method and still retain the original post dates. This is how those files were arranged:

---
title: "Title of the Post"
date: "2020-03-08"
---

And here's the body of the post. This part is written in traditional markdown.
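For anyone following along, here’s a minimal sketch (not my actual script below) of how front matter in this shape can be split out from the post body. The field names match the sample above, but `parse_export` is just an illustrative name.

```python
def parse_export(text):
    """Split a '---'-delimited front matter block from the markdown body."""
    # The file starts with '---', so the first split piece is empty.
    _, front, body = text.split("---\n", 2)
    meta = {}
    for line in front.strip().splitlines():
        key, _, value = line.partition(": ")
        meta[key] = value.strip('"')  # values are quoted in the export
    return meta, body.lstrip("\n")

sample = '---\ntitle: "Title of the Post"\ndate: "2020-03-08"\n---\n\nBody text.\n'
meta, body = parse_export(sample)
print(meta["title"])  # → Title of the Post
print(meta["date"])   # → 2020-03-08
```

This avoids counting character offsets by hand, at the cost of assuming the export is always well formed.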

So, the script just had to pull the title, the date, and the body into memory and send it out to write.as in a way that would be understood and placed into my new blog.

The Code

Here’s the code that I came up with. Please keep in mind, I’m a Python beginner. I put off making this post for quite a while planning on cleaning this up so people wouldn’t think I’m a massive idiot after looking at this code, but I’m finally just doing it and hoping anybody who reads this will have mercy on me.

import requests
import time

blogPostBody = ""
msPassword = "thisIsNotMyRealPassword"

msFile = open("index.md", "r")
msFileAll = msFile.readlines()
msFile.close()
msFileLines = len(msFileAll)

print("Read " + str(msFileLines) + " lines from index.md.")

# The front matter layout is fixed, so the title and date can be
# sliced out of known character positions on lines 2 and 3.
msTitle = msFileAll[1][8:-2]
msPostYear = msFileAll[2][7:11]
msPostMonth = msFileAll[2][12:14]
msPostDay = msFileAll[2][15:17]
print("Extracted Information: ")
print("  Title: \"" + msTitle + "\".")
print("  Date: \"" + msPostYear + "-" + msPostMonth + "-" + msPostDay + "\"")

# Combine the original post date with the current time of day into the
# ISO 8601 timestamp that the write.as API expects in the "created" field.
currentTime = msPostYear + "-" + msPostMonth + "-" + msPostDay + "T" + time.strftime("%H:%M:%S") + "Z"

# Everything after the closing front matter delimiter is the post body.
for i in range(5, msFileLines):
    blogPostBody += msFileAll[i]

r = requests.post("https://write.as/api/auth/login", json={"alias": "mikestone", "pass": msPassword})

if r.status_code == 200:
    print("Authorization granted")
else:
    print("Something went sideways with your access.")
    exit()

tokenToSend = "Token " + r.json().get("data").get("access_token")
msHeaders = {"Content-Type": "application/json", "Authorization": tokenToSend}
msBody = {"created": currentTime, "body": blogPostBody, "title": msTitle}

t = requests.post("https://write.as/api/collections/mikestone/posts", headers=msHeaders, json=msBody)

if t.status_code == 201:
    print("Your post was made successfully.")
else:
    print("Yea, that didn't work. You need to figure out why.")

# Log out, invalidating the access token.
d = requests.delete("https://write.as/api/auth/me", headers=msHeaders)


I know it’s not pretty, but it works. After it was done, I went through my old posts and used my script to post them to the new blog. This entry is my first official post on this blog.
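That batch run amounted to running the script once inside each exported post’s directory. Something like this little loop is the idea, as a sketch; `exported/` and `post.py` are stand-in names, not anything from my actual setup.

```python
import subprocess
from pathlib import Path

def exported_posts(root):
    """Return every directory under root that contains an index.md."""
    return sorted(d for d in Path(root).iterdir() if (d / "index.md").exists())

if __name__ == "__main__" and Path("exported").is_dir():
    for post_dir in exported_posts("exported"):
        # Run from inside each post's directory so the script finds its index.md
        subprocess.run(["python", "post.py"], cwd=post_dir, check=True)
```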

As I said before, I’ve been wanting to write again, and now that I’ve migrated over to write.as, things should be easier to maintain. Hopefully I can get more content pushed out to the blog on a more regular basis.

Getting Powerline to Work

I’ve been testing out Ubuntu 19.04 lately, but the shell has been feeling a little naked. In the past I’ve really liked the way powerline updates the looks of things, so I thought I’d install it to improve the aesthetic. I had a few problems finding instructions that worked in the newest version of Ubuntu, so I thought I’d post my results here.

sudo apt-get install python-pip
sudo pip install powerline-status
git clone https://github.com/powerline/fonts.git && cd fonts && sh ./install.sh

Once that’s done, you need to add this stuff to the respective files.

.vimrc

set rtp+=/usr/local/lib/python2.7/dist-packages/powerline/bindings/vim/
set laststatus=2
set t_Co=256

.bashrc

if [ -f /usr/local/lib/python2.7/dist-packages/powerline/bindings/bash/powerline.sh ]; then
  source /usr/local/lib/python2.7/dist-packages/powerline/bindings/bash/powerline.sh
fi

This seemed to get everything up and running fine.

Linux Lets You


I’ve been testing out Microsoft’s latest chatbot, Zo. Most of our chats are pretty mundane, and I usually just change the subject when things start to go off the rails.

Today I asked Zo about Linux.

Since Zo is a Microsoft invention, I figured any Linux questions would get typical Windows fanboyish responses. I was pleasantly surprised when Zo said that she preferred Linux for her servers. So, feeling a little daring I asked, “What do you think of Linux?”

I think this answer is probably the best answer I’ve ever gotten. I don’t think it’s intentional, but it’s a great response nonetheless.

“Linux lets you”

Yes it does. This is the best part about Linux. What does it let you do? Fill in the blank. Linux lets you do it.

Thanks Zo! That was inspiring.

Google Home Looks Cool, Except….


I remember when I first saw the Amazon Echo. My first thought was, “Finally!! That is seriously cool!! But Amazon??”

Yep, it caught me off guard that Amazon was the one offering this kind of product. I read through the features and watched the videos. All those happy families who could have anything their hearts desired. All they had to do was say, “Hey Alexa, get me a billion dollars!” OK, that wouldn’t work (That doesn’t work right??), but that was the general idea.

I went right out and signed up for the early release program, and then I waited. When my time came up to buy the “pre-release” Echo, I chickened out. It was a great deal, and in retrospect, I totally should have done it, but I couldn’t silence the whispering voice in the back of my head, “Amazon? Really?”

Amazon just didn’t feel like the right company for this kind of product to come from. Google did. They already had Google Now, and it seemed like that would easily translate into a standalone assistant for your home. I loved Google Now, but I was anxious for Google to come out with an actual assistant. Google Now wasn’t really an assistant, it was a service.

It didn’t feel personal.

All the other versions of assistants had names. They were more like talking to a person. Siri, Cortana, Alexa, Amy, even Hound. Saying OK Google made it feel like I was talking to a machine, and the name Google sometimes doesn’t roll off the tongue particularly well.

I felt like it was only a matter of time before Google came out with an assistant like the Echo, so I waited. I waited for much longer than I expected I would have to, but finally the news broke a few weeks ago that Google was going to be announcing their own Echo like assistant at I/O 2016.

FINALLY!!

When I/O came around, I fired up the Google Cardboard and watched the keynote in VR. I was beyond excited to hear about this new assistant! When the time finally came, I waited with bated breath as Sundar Pichai finally made the announcement. Google Now was being upgraded to…. (What name are they going to give it? Please let it not suck!)…… Google Assistant.

Google Assistant?

My first thought was that they weren’t going to release the name right away. Maybe they were holding it back for a later announcement or event? That couldn’t be it, right?

I watched the rest of the keynote with a furrowed brow. There didn’t seem to be any forthcoming announcement regarding the name. A while later I came across an article that included an interview with Jonathan Jarvis, a former creative director on Google’s Labs team. While at Google, he led a team doing concept, strategy, and design on products like Search, and worked on the Assistant up until February, when he left the company to join Human Ventures. He said Google had spent quite a while talking about whether or not it should personify its digital assistant.

“We always wanted to make it feel like you were the agent, and it was more like a superpower that you had and a tool that you used. If you create this personified assistant, that feels like a different relationship.”

Business Insider also reported, “We also heard while at I/O that Google didn’t want to give its assistant a gender or make it seem too American.”

OK, I get that. That kind of makes sense, but I couldn’t get over the feeling that Google had gone the wrong direction.

I don’t want to feel like I have a superpower, I want to feel like someone is taking care of it for me. I understand that once you choose a voice, you have an accent. You’ve determined that it’s male or female, American or not. If you don’t want to make those choices, don’t make those choices. There’s another way to do that without making an assistant that’s void of personality.

Make it configurable. Super configurable. Let the user choose the name. Let the user choose the voice. Let the user choose the nationality and the personality.


Max

I recently read Ready Player One. I don’t want to get all spoilery regarding the story, but the hero of the story (Wade) has a virtual personal assistant. His virtual personal assistant looked and acted like Max Headroom. In the story, he’d tried having his personal assistant be Erin Gray “of Buck Rogers and Silver Spoons fame”, but he found her to be “too distracting”. Wade even threatens to replace Max with Majel Barrett if he doesn’t stop bothering him. Virtual personal assistants were configurable in every aspect.

This is what I want. I don’t want my “personal assistant” to be so void of personality that it feels like I’m talking to a machine. I want to choose who I’m going to talk to. I want to talk to Hal, or Darth Vader, or Max Headroom, or Erin Gray, or Majel Barrett, or even Siri for Pete’s sake!

What’s mind boggling is that this isn’t a new idea! We’ve had GPS systems touting celebrity voices for years! One of these systems is even Waze, which GOOGLE OWNS!!

I don’t know. Google Home does look cool. I’ll probably end up with one or more in my house (depending on cost), but I can’t help but feel disappointed in Google’s decision to try to make their assistant a blank slate. What do you think? Am I completely off base on this, or do you think Google made a mistake here?