Monday, November 17, 2014

How does an open source .NET affect Xojo developers

As most of you know, I live in two worlds as a developer.  Yes, I develop cross-platform applications, but that's not the two worlds I'm talking about. Specifically, I'm talking about languages.

For most of the last decade, I've been firmly planted in the .NET world, using C# to build new systems and web applications that ran on Windows and, for the last few years, to build embedded applications for a variety of different purposes using the .NET Micro Framework.  But that only got me part of the way there. While Microsoft describes the .NET framework as 'cross-platform', it really means 'Windows and a few platforms we officially bless' (like the .NET Micro Framework I mentioned above). It did not mean Mac. It did not mean Linux (not officially, at least). So, if I really wanted to go cross-platform, I needed to look elsewhere.

For me, that elsewhere was Xojo. Xojo is a programming language that is very similar to VB.NET and strongly delivers on the promise of cross-platform: write your code once on Mac, Windows, or Linux, compile it for any of the other platforms, and it will run unmodified. It's the promise of Java, only better. It's like having the best of both worlds: the ease of .NET combined with the cross-platform nature of Java.

As you might imagine, reading the news about Microsoft open sourcing .NET gave me a reason to reflect on the future of Xojo and whether I really wanted to hitch my wagon to a technology made by a small company that could very well be obsoleted within a year or two if the .NET train delivers on its promises and actually goes fully and officially cross-platform. Does the promise of working in one heavily supported language and ecosystem provide everything a cross-platform developer needs? Does this mean that there is a looming end in the near future for Xojo?

Absolutely not.

There are three big areas where .NET technologies outclass Xojo almost entirely:

1. Visual Studio - While it's technically possible to write .NET code without Visual Studio, the IDE is so good that you'd be a fool to skip it. Visual Studio has features that cater to the needs of just about every developer and it's amazingly stable. Add to that a rich plugin ecosystem that extends the IDE and you have one of the best development environments in the industry.

Xojo, by comparison, isn't even close to being there. The IDE doesn't have a rich plugin ecosystem and has performance issues even on its best-supported platforms. And with the announcement of the Visual Studio Community Edition (which is Visual Studio Professional, free), the Xojo IDE being a paid product adds another challenge the company needs to address. You can develop entire solutions in .NET, using Visual Studio, without paying a penny. With Xojo, you're going to pay at least $99 just to write for one single platform.

2. Stability - I'm not going to pretend that Xojo apps don't have stability problems. Xojo developers learn to compensate for those issues with a variety of tricks but, the fact remains, I would never choose Xojo for a large-scale project that required rock-solid reliability. My choice would always be C# and the .NET framework. Thankfully, Xojo has been laser-focused on this issue lately and it's getting much better, so this might not be a problem for much longer. But, still, for the time being, .NET wins on this issue.

3. Documentation - I don't think anyone can deny that MSDN has some of the best developer documentation in the industry. Xojo, by contrast, has a wiki that provides decent but incomplete documentation. There are still methods and technologies that the developer is left to figure out on their own, with very little guidance and no examples from Xojo. This, like item 2, is getting better but it's not there yet.

Out of the three items listed above, I think items one and two present the largest challenges to Xojo's future in the face of an open sourced .NET.  Thankfully, item one can be fixed pretty easily: rewrite the IDE with a focus on stability and performance across all platforms.  Sure, that will take time, but it's not insurmountable.

Item 2, on the other hand, presents a larger problem for Xojo. The company behind the language has a history of moving too fast and not fixing things in a timely fashion. For example, they're hard at work on giving Xojo users the ability to write iOS apps while several stability issues on the desktop have never been addressed, even after multiple releases. If the language is going to compete, the company needs to change this and start taking stability seriously.  I'd love to use Xojo in enterprise app development, but I'd have to be crazy to do that right now. That needs to get better, fast (and, thankfully, it is improving).

So my final conclusion is that Xojo can compete with an open sourced .NET. It's going to be a bloody battle, and one where Xojo Inc is going to have to be ready to make some changes to win, but it's a battle that can be won. Xojo users are fiercely loyal. We want to believe in the company and we're willing to fight for it. That's important. Xojo Inc just needs to saddle up and go kick some ass.

Sunday, November 16, 2014

The easy way to work with JSON data in C#

A few years ago, the dominant way to exchange data between systems online was XML. XML can be a complicated and convoluted format to work with and, as with most complicated and convoluted formats, it quickly fell out of favor as soon as something better came along. That something better was a new format made for the web called JSON (short for JavaScript Object Notation).

JSON provides a simple, human readable, intuitive way to exchange almost any kind of data. Unlike XML, JSON data is small, easy to understand, and infinitely flexible. Best of all, working with JSON in most programming languages is amazingly simple. It's the perfect format for a modern world, populated with modern programmers.

Today, we're going to look at a topic I seem to get a lot of questions about: using JSON data in C#. Technically, JSON formatted data is called serialized, and the process of turning it into something you can easily work with is called deserialization. Today, we're going to look at how to deserialize JSON data. In the next article, I'll cover serializing data so that you can easily create your own web services.

First things first, C#'s native JSON parser sucks!

If you've done any research into working with JSON data in .NET, you've probably seen the built-in .NET solution and know it's definitely not up to the task of everyday use. So we're going to use the open source JSON.NET library from James Newton-King. It's an excellent library that's well supported and constantly updated, and it's widely agreed to be the best solution for working with JSON data. You can get it here.

How to deserialize JSON data in C# with JSON.NET

While there are a few ways to deserialize JSON data with JSON.NET, we're going to look specifically at what I consider the easiest: modeling the data in a class and then accessing the class members as regular properties.

Let's say you have the following JSON string you want to process and you've stored it in a string variable called response:

{
    "Name" : "Tom Jones",
    "Age" : 26,
    "Spouse": "Jane Jones"
}

First, let's create a class that models the JSON data we want to process. It's amazingly easy:

public class JSONData
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Spouse { get; set; }
}

That's it. Just a public property with a getter and a setter for each JSON field we want to process (the properties need to be public so JSON.NET can populate them). Nothing more needed. Now, let's access that data from our main class:

JSONData jData = Newtonsoft.Json.JsonConvert.DeserializeObject<JSONData>(response);

From there, we can access the different fields like this:

Console.WriteLine(jData.Name);
Console.WriteLine(jData.Age);
Console.WriteLine(jData.Spouse);

It's really that simple. Using JSON with the power of the JSON.NET library and C# makes a developer's life amazingly simple! Now, there's nothing stopping you from grabbing data from any API on the planet.
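
To give you a taste, here's a minimal sketch of doing exactly that, reusing the JSONData class from above. The URL is made up for illustration; point it at a real API that returns JSON in the shape your model class expects:

using System;
using System.Net;
using Newtonsoft.Json;

class ApiExample
{
    static void Main()
    {
        // Hypothetical endpoint; substitute the real API you want to call.
        string url = "https://api.example.com/people/1";

        using (WebClient client = new WebClient())
        {
            // Grab the raw JSON over HTTP...
            string response = client.DownloadString(url);

            // ...and turn it into a strongly typed object.
            JSONData jData = JsonConvert.DeserializeObject<JSONData>(response);
            Console.WriteLine(jData.Name);
        }
    }
}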

Have fun!



Friday, November 14, 2014

Microsoft is setting .NET free and it's going to be awesome!

As most of you know, I'm a huge fan of the C# programming language.  C# is a language that retains everything that's good about C++ and Java but modernizes things and makes getting things done much simpler.  Unfortunately, while C# can be used on multiple platforms via the Mono project, it's not officially supported by Microsoft on any platform other than Windows.  Not only that but, in terms of features, Mono tends to lag a bit behind the "official" C# implementation because the Mono project is basically reverse engineering what Microsoft is doing and coming up with its own implementation of things.

That all changed yesterday.

Yesterday, at its annual Connect conference, Microsoft announced that it was moving forward with its plans to open source nearly the entirety of the .NET framework and to work to enable it to run (in parity with the Windows version) on other platforms including Linux, Unix, and Mac. In addition, the company included a patent promise that says it explicitly agrees not to sue anyone using, changing, or marketing the code. The patent promise is a huge step in promoting the adoption of .NET in the open source world, since many people have been afraid to use it for fear of being sued by Microsoft. Thus, while we've had the ability to write .NET software on other platforms for years, the promise the technology offers has never been fully realized in the open source world because of the general distrust most people have of all things Microsoft.

Personally, I'm ecstatic about this move. When I was a Windows developer, my language of choice was C# and I truly came to miss it when I went full-time to Linux. With this move by Microsoft, I, and many developers like me, will once again have access to our favorite languages on whatever platform we decide to work on. This isn't just a win for developers though, it's a win for Microsoft who will, I hope, see .NET and C# simply take over the world. For the first time in over a decade, C# is a serious competitor to Java. It's always been a better language, but Java has always beaten it on cross-platform support. Now the game is afoot, and Microsoft seems ready to become the tiger of development again and take on Java everywhere. I think we're about to see a bloodbath, with developers, users, and Microsoft coming out the clear winners.

One of the things I'm particularly excited about is that open sourcing .NET could resolve some long standing security fears developers have had around the technology. Take the cryptography API, for example. Sure, you can easily do encryption using .NET but how secure is it? Might there be back doors that we don't know about? Open sourcing the code will close the door once and for all on those questions and could go a long way in restoring developer trust in Microsoft technology. It's certainly a bold move and I think it shows that the company is serious about competing across the spectrum.

Lastly, it should be pointed out that we won't see many of the benefits of yesterday's moves for a while. Microsoft is a big company and .NET is a huge technology with a lot of moving parts. It's going to take a few months to get everything planned out and deployed. But the company has already made moves by putting the source to the C# compiler and a lot of other parts of the .NET framework in a GitHub repository for anyone who wants it. That's right, you can go and download one of Microsoft's crown jewels right now and change the code or compile your own version.

Honestly, I like the new Microsoft. I think they're making all the right moves and have finally realized that the world doesn't have to revolve around Windows for them to survive, thrive, and kick ass as a company. Now, we just need to talk about getting Visual Studio on Mac and Linux.  Too soon?


Saturday, October 18, 2014

Firejail: Sandbox individual applications at will on Linux

We take a lot of risks with our computers. Sure, we dutifully apply the latest security patches from our software and operating system vendors, but those don't account for as-yet-undiscovered bugs that may be lurking deep within the heart of some of the most vulnerable software we use. The truth is that, if someone can compromise an Internet-facing program on your computer, they basically have the same rights and privileges on your machine that you do and can access all of the files you have access to.

UNIX and UNIX-like operating systems have long had the concept of a 'jail' as a way to sandbox untrusted software away from the rest of the system. But a traditional jail is generally difficult to set up and consumes a good amount of resources. That type of sandboxing is also best suited for servers and isn't as useful for desktop users.

Enter Firejail

Firejail is a new application sandboxing tool that allows you to quickly and easily set up a jail for any program you don't want to have access to your entire system. Implemented as a SUID program that builds its jails out of Linux namespaces, Firejail confines applications so that, if they are compromised, the attacker only has access to a very limited part of your system and is effectively blocked away from all other parts - even the parts that the user running the application has access to.

Firejail is also amazingly easy to use. While you can get a bit complicated with the way you configure the program, you don't have to. For example, running Firefox in its own jail is as simple as typing

firejail firefox

Seriously, that's it. This simple command sets up a jail with default restrictions and then starts the Firefox web browser inside that jail. Of course, the default jail might be too restrictive for some programs so you can customize what each program has access to by creating application profiles.
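
For example, assuming a reasonably recent version of Firejail (check the man page for the options your build supports), you can start Firefox with a throwaway home directory:

firejail --private firefox

or open a PDF viewer with networking switched off entirely:

firejail --net=none evince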

I've been playing with Firejail for a few weeks now and love it. Sure, it's not a guarantee that someone isn't going to breach your system and wreak havoc. But it's another layer they have to get through in order to do so and, historically, jails have proven to be pretty secure.

So if you want to bolt one more layer onto your security, check out Firejail and see what you think! It's available for all the major Linux distributions.

Saturday, September 27, 2014

ShellShock: Another indictment against the "many eyes make all bugs shallow" mantra of Open Source

A few months ago, the Internet was hit by one of the largest security vulnerabilities in its history: a years-old bug in the OpenSSL cryptography library called "Heartbleed". The bug allowed any attacker with a little time to probe a server running OpenSSL and retrieve sensitive information like usernames, passwords, and other private data. When discovered, the bug was quickly patched and excuses were made. This was not an indictment of the open source model, advocates argued, but of the fact that OpenSSL was underfunded and understaffed.

Earlier this week, another bug called ShellShock surfaced which has the potential to be just as big and destructive as Heartbleed. The bug has to do with the way the popular UNIX/Linux shell bash processes environment variables. It gives anyone who knows how to exploit it the ability to run arbitrary commands on any vulnerable server. Unfortunately, 'any vulnerable server' could mean nearly every UNIX/Linux server deployed in the last 20 years.

Read that carefully: the bug is 20+ years old. It came into being in Bash 1.1. And it allows attackers to take over a vulnerable system and do pretty much anything they want with it.
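
If you're curious whether your own machines are affected, the test that's been making the rounds is a simple one-liner you can run in a terminal:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A vulnerable bash prints 'vulnerable' before the test message; a patched one prints only the test message (possibly with a warning about the ignored function definition).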

We in the open source world like to tout that our software is secure because the code is available and people can look at it to discover and fix bugs. Popular developer and open source advocate Eric Raymond famously put it as "given enough eyeballs, all bugs are shallow". But we now have two examples of major, commonly used pieces of software with bugs that somehow went unfound for YEARS and, in one case, nearly TWO DECADES.

While I'm not intimating that these flaws are deliberate, one has to wonder how large flaws in such critical systems could go completely unnoticed for such a long length of time. It smacks of sinister influence by some organization, of developer incompetence, or of the fact that nobody is actually reviewing the code. If any of those is true, then open source is absolutely no less vulnerable to critical holes than its proprietary cousins, and that should concern us deeply.

Saturday, June 28, 2014

Learn how to use threads in C#

For some odd reason, the article I wrote about loving C# yesterday sparked a flurry of emails asking me about threading. So I thought I'd write a quick and dirty tutorial on how to write multithreaded applications in C#. It's not going to be enterprise-class software, but it should give you a good enough idea to get you started.

First, what is multithreading?

Multithreading is the process of running multiple instructions on a computer at the exact same time. Today's computers often have 4-8 cores, which means you can execute 4-8 instructions at the same time using multithreaded programming concepts.  That often (but not always) means your software will do things faster, because it can chew through data and instructions at a more rapid rate than a single-threaded application.

How about an example?

The code below is a simple example of how to write a multithreaded application in C#. It's not complex and you should be able to figure it out but we'll walk through it once we're done anyway.

using System;
using System.Threading;

namespace AMultithreadedApplicationExample
{
    class ThreadedProgram
    {
        // The work that will run on the second thread.
        public static void MyChildThread()
        {
            Console.WriteLine("Hello World from thread #2!");
        }

        static void Main(string[] args)
        {
            // Wrap the method in a ThreadStart delegate.
            ThreadStart childthrd = new ThreadStart(MyChildThread);
            Console.WriteLine("Hello World from thread #1!");

            // Create the thread, handing it the delegate to execute.
            Thread childThread = new Thread(childthrd);

            // Schedule the new thread to start running.
            childThread.Start();

            // Keep the console window open until a key is pressed.
            Console.ReadKey();
        }
    }
}

Much of this code should already be familiar to you as it's all part of a standard C# program. So let's look specifically at the MyChildThread() method and a few statements in the Main() method.

As you can see, the MyChildThread() method is nothing special. It's just a standard method that gives no hint that it's going to be run in a multithreaded way. All the magic happens in the Main() method and it's pretty easy to understand.

First, we define a ThreadStart delegate called 'childthrd' and pass the name of the method we want to execute on the new thread to its constructor. Next, we create the new Thread object, passing the ThreadStart delegate to its constructor. Finally, we start the new thread with a call to its Start() method. That last call is what actually starts (or, more precisely, schedules to start) the new thread that executes the MyChildThread() method.

Once started, threads usually run to their end and then gracefully die. But there might be times when you want to explicitly kill a thread before it finishes executing. This too is amazingly easy: you simply call the thread's Abort() method. Under the hood, Abort() raises a ThreadAbortException inside the target thread, which terminates it (and which the thread can catch if it needs to clean up first).
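
Here's a minimal, self-contained sketch of that, assuming the full .NET Framework, where Abort() is fully supported:

using System;
using System.Threading;

class AbortExample
{
    // A worker that loops until it's aborted.
    public static void LoopForever()
    {
        try
        {
            while (true)
            {
                Console.WriteLine("working...");
                Thread.Sleep(500);
            }
        }
        catch (ThreadAbortException)
        {
            // Abort() raises this exception inside the thread,
            // giving it one last chance to clean up before dying.
            Console.WriteLine("Aborted; cleaning up.");
        }
    }

    static void Main()
    {
        Thread worker = new Thread(LoopForever);
        worker.Start();

        Thread.Sleep(2000);  // let the worker run for a couple of seconds
        worker.Abort();      // ask the runtime to kill the worker
        worker.Join();       // wait for the worker to actually finish
    }
}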

While this is not, by any stretch of the imagination, an exhaustive look at threads, it should be enough for you to start doing your own research. Threads are a powerful way to write high-performing software and, once you learn them, you'll find all sorts of situations where they come in handy.

Happy multitasking!







































Friday, June 27, 2014

Finding "the programming language of your soul"

Most of you who've read this blog for any length of time know that I'm not a well-rounded programmer. I don't jump from one 'latest hot language' to the next and I'm definitely not up to speed on a lot of the languages that people are so excited about today. That's because I made a decision long ago to invest my time in learning only languages that I thought would benefit my career in the long term. For the most part, that's meant Java, C++, C#, Python, and Xojo. I "know" other languages and can get around in them, and I can pick up new languages quickly, but those five are pretty much my bread and butter and the ones I've invested the most in mastering.

Out of the handful of languages I've taken the time to learn well, there are two that have delighted and surprised me so much that I've labeled them 'the languages of my soul' (hat tip to Scott Hanselman for the awesome new term). Those languages are C# and Xojo.

For completely different reasons, each of these languages has captured my attention and made me feel at home when I'm working in it. And while their approaches to most tasks are radically different, I never feel like I have to struggle against the compiler or the language (I'm looking at you, C++) to get work done quickly.  Don't get me wrong, both of these languages have also caused enormous frustration in my life, but it seems like every time I have real work to do, I find myself reaching for one of them.

You might ask yourself: what's so special about these two languages in particular and why should I consider them? There are a number of things I love about each, most importantly their approach to development. C# gives me the near-raw power of C++ without all the nasty headaches, gotchas, pointers, and memory management worries, while Xojo just makes the old Visual Basic developer in me happy by putting even the most complex tasks just a few lines of code away.

But you shouldn't consider any of that. Each of us will come to the languages of our soul in different ways. I know some people who feel a near-spiritual connection with Java, or Python, or Lisp. I don't understand it, but I have to respect that those are the languages they've bonded with. It's what works for them. So don't take my word for it and simply choose C# or Xojo as your primary languages (though both would make really good choices); go out, experiment, and find the language you connect with the most. Maybe that's C# and Xojo, maybe it's not. But whatever it ends up being, you will never want to work in any other language again.

That's how you know your soul has found a home.






Sunday, June 15, 2014

Saying Goodbye to Xojo on Linux: Breaking up is Hard to Do

I've been using the Xojo programming language for a few years now.  When I left Windows for Linux, I began looking for a decent, easy-to-use, cross-platform language that not only allowed me to compile my software for different platforms but also allowed me to write it on different platforms.

Having been a C# developer for a number of years, I naturally gravitated towards Mono. Mono was good and handled a lot of what I was doing with very few problems. But I didn't like the 'Microsofty' feel of it or the fact that support for it appeared to be declining on Linux as a whole. There were also signs that Mono was about to start focusing less on the desktop and more on mobile development, which was cool, but not a good sign for my future desktop development projects.  So, pretty quickly, Mono was out of the game for me.

Then, one day, I was having a conversation with a friend and complaining about my development woes. He was exploring Xojo (then called RealStudio) and recommended I try it. He gushed about the cross-platform nature of both the code you wrote and the IDE and, pretty quickly, I was hooked. Sure, there weren't a lot of contract jobs asking for Xojo developers, but I didn't do a lot of contract work anyway. When you sell software for a living, most people don't really care what language it's written in. So began my love affair with Xojo - a love affair that continues to this day, mind you.

You might be asking yourself: if he loves Xojo so much, why is he abandoning it on Linux? Good question and there's an equally good answer for it: on Linux, Xojo is basically unusable. 

I'll be the first to say that Xojo has always had its share of bugs. It's not nearly as mature as languages like C# or IDEs like Visual Studio. But it was totally usable for a number of years. Then I started to notice problems on the Linux side of things. First, the 2012 IDE would simply segfault and crash when started on Linux, then exceptions would fall through and be impossible to catch, then the compiler would declare variables 'undefined' even though I'd defined them two lines before and within the current scope, etc.  All in all, it meant that I often had to hang back a version or two because something was always wrong with the current one.

When I heard that Xojo was completely rewriting the IDE and improving some language features, I was excited.  This, I thought, was where they'd bring things to a more usable state on Linux. This was finally going to be our time as Linux devs to shine, and software written in Xojo ON LINUX would flood the marketplace.

Not quite.

Xojo released a stellar IDE with a lot of fixes (exceptions work reliably and the IDE doesn't segfault!). Unfortunately, new bugs were introduced, and it doesn't really look like Xojo is that interested in fixing them for Linux users. The showstopper for me is the IDE speed problem. The short of it is that, as you use the IDE, it gets slower and slower. After a few minutes, you'll type a whole line of code before it even shows up on your screen. Moving controls within the visual editor is painfully slow too, with controls often losing focus while you're moving them and dropping back onto the form.  I honestly can't even begin to tell you if there are bugs in the Linux compiler because, well, I can't actually write enough code to test it out.

So I'm abandoning Xojo on Linux. I might install it on Windows and use it to compile cross-platform, but I'm giving up on my dream of having Xojo run smoothly enough on Linux that I can do my primary development there. For Linux, I'm going back to C# and Mono. It's not my preferred choice, but it works and it's stable.  I hope that Xojo takes a stronger interest in us Linux devs and actually fixes the issues it has on the platform, but it just doesn't seem to be a huge priority for them.

Now, what I wrote above sounds like an awful lot of complaining, and it is. Xojo is asking professional developers to spend upwards of $1000 for a product that is unusable when you install it on Linux.  I don't completely fault Xojo for this, though.  Xojo use on Linux is probably a minuscule part of their user base. Mac seems to be tops, with Windows second. They don't make a lot of money on Linux and there is no compelling reason for them to invest engineering time and resources into making it a world-class product there. There are a number of reasons most Linux devs don't use Xojo, and stability has nothing to do with them. So, even if Xojo fixed the problems, they still wouldn't make money, and I definitely don't expect them to spend time and money for such a little payout.

Still, I'm a bit sad. I really like Xojo and I love how fast it is to develop with. Perhaps I'll buy a Mac and use that as my development machine just so I can enjoy Xojo.  But I sure wish I didn't have to. Maybe it's a case of wanting my cake and eating it too. Maybe I want too much from a company that doesn't make much money from my platform of choice. Perhaps I and my choice are the problem.

But perhaps...just perhaps...it will get better.


Sunday, June 1, 2014

It's time to Reset the Net

Last June, Edward Snowden blew the lid off the largest government mass surveillance program in history. The classified internal NSA documents Snowden leaked showed a government that is out of control, drunk with power, and with a total disregard for the law or the Constitution.

Through the leaks, we saw a picture emerge of a National Security Agency hell-bent on collecting every bit of information about our lives and, in particular, our lives on the Internet. Some estimates say that the NSA was/is capturing up to 75% of Internet traffic flowing through and within the United States. And it doesn't matter if it's traffic from a US citizen to another US citizen or not. They don't care; they want it all.

That's why it's time for a reset. We need to stand up and take direct action against mass surveillance and take concrete steps to protect our privacy and the privacy of our fellow citizens. Large companies like Google and Microsoft have taken steps, but they mean nothing if we users don't do the same.

On June 5th, Fight for the Future is sponsoring an Internet-wide event called "Reset The Net" that will help spread the word about tools we can use to fight back. While we can't fight targeted surveillance, we can put a stop to the sweeping gathering of everyone's information.

If you're a website owner, it's important that you participate. Please visit the Reset The Net homepage and sign up to get involved. Participating is simple and easy and has the potential to make an enormous difference. Let's face it, the politicians we've elected can't be trusted to protect us. It's up to us and we have the tools to do it.

So let's do it! Let's "Reset the Net"!


Thursday, April 10, 2014

Why fraud is so easy on the Internet

Last week, a fairly large number of email addresses associated with customers of the popular Bitcoin service Coinbase.com leaked to the Internet.  Since then, a number of phishing attacks have been launched against those email addresses in the hopes of stealing user login details and gaining access to the millions of dollars in Bitcoin stored in Coinbase user accounts.

Tonight, I received such an email and thought I'd follow it to its logical conclusion. I traced IP addresses and found that:

1. The person sending the email originated from China.
2. They used a server at GoDaddy.com to send the phishing email.
3. They put a fake Coinbase.com website up at the Aplus.net hosting provider.

This seemed pretty cut and dried. I'd call these companies, file reports, and they'd crack down on the fraudsters immediately by closing the associated accounts.

That's not even close to what happened.

First, I called Aplus. Even though I had the URL of the fraudulent website that was sitting on their servers, I was told there was nothing they could do. "We can't just go and shut down a website based on a complaint" is what I was told. Even though the complaint could be backed up with proof on a server THEY controlled? Yep, sorry, can't help.

Next, I called GoDaddy. They are the world's #1 domain name registrar and hosting provider. Surely they would do something. Nope, they couldn't do anything either. In fact, I was told by the agent I spoke to that they couldn't do anything until the authorities told them to take the site down! Really?  What if I was reporting a site streaming live child porn, I asked. That's different, I was told. How? They are both crimes, and GoDaddy's server is being used to facilitate the crime in both cases. Why is one different?

The rep at GoDaddy wasn't done though. He told me that my complaint was like 'calling Ford and reporting seeing a Mustang speeding'. Sure, except there is nothing Ford can do about a random Mustang speeding and there is everything GoDaddy can do to stop their server from doing illegal things.

In the end, I sent abuse reports to both Aplus and GoDaddy. I'm sure 'something' will be done eventually, but how much money will be stolen in the hours before these two complicit companies get off their behinds and decide it's actually worth doing something?  Responses like these are exactly what criminals depend on. They know these companies simply can't be bothered to act until something bad happens. So, while they don't expect a scam site to stay up for long, they know it will likely survive at least a little while, and they'll make a little (or a lot of) money before the companies do anything.

GoDaddy and Aplus should be absolutely ashamed. If their 'policy' is to do nothing, then their policies need to change. I am ashamed to say I am a customer of GoDaddy. Their callous attitude towards the abuse of their servers is unconscionable and needs to be rectified. Until then, I would encourage anyone who is a customer of either GoDaddy or Aplus to go elsewhere. Policies will change when the money dries up. WE control that.

Thursday, March 20, 2014

Help me raise $500 to support free speech worldwide?

Anonymous remailers have been around for over 25 years, providing people with a way to raise their voices completely anonymously and untraceably. Unfortunately, over the last two decades, the remailer network has suffered from lack of interest, decreasing technical understanding, and a whole host of other problems.

A small group of us who are passionate about free speech are trying to revitalize the remailer network: bringing up more remailers to make the network more secure, making them easier to use, and so on. Right now, I'm trying to raise $500 to bring a few new remailers into production within the next week. The more remailers we bring up, the stronger the anonymity.

Can you help me raise that $500?  I'm asking those of you who are interested in privacy and free speech to donate whatever you can to this cause. Even a single dollar will help us pay for the systems that run the new remailers. Whatever you can do, your help is greatly appreciated.

To donate via Paypal:
Send your donation to remailers@cpunk.us

To donate via Bitcoin:
Send your donation to: 1H3eXerEQMqodTXRgLGnM1GUpLYCXBTF1e

Thank you for whatever you can do!

- Anthony

Saturday, February 22, 2014

OpenSSH: the Swiss Army Knife of Network Tools

Like many people involved in tech, I've used the SSH tool a lot over the years.  But I've mostly just used it in the 'plain vanilla' way to securely log in to remote machines. Today, I decided to dig deeper into OpenSSH (the standard SSH program for Linux/UNIX) and I was completely blown away!

OpenSSH is amazing. It's the Swiss Army Knife of network tools. Using this simple little program you can:

  • connect securely to a remote machine using an encrypted connection
  • create a VPN-like service without all the fuss of OpenVPN
  • access your UNIX/Linux programs from your Windows and Mac machines
  • get around port blocks that your ISP enforces (think: port 25 - see the sketch at the end of this post)

In my post today, I'm going to discuss the four points above and show you how simple doing those things really is. I think that, when we're done, you'll likely be chomping at the bit to get OpenSSH set up and running on your systems if it isn't already.

The 'blah' stuff: connecting to a remote machine using OpenSSH

This is probably the way most of us have used OpenSSH in the past. We've got a remote server at work, at home, or on a VPS, and we want to connect to it and manage it securely. Doing that is incredibly simple:

ssh username@hostname.com

That's all it takes and you'll be presented with a login prompt (or asked for your SSH key passphrase, depending on how you've set things up). From then on, everything you do over that connection will be encrypted and completely safe from prying eyes.

I want my own personal VPN but OpenVPN is too hard to set up!

No problem, OpenSSH can give you VPN-like functionality without all the fuss that OpenVPN entails. I've set up OpenVPN in the past and it's not a fun task. And it's a complete waste of time if all you want to do is browse the web and check email without your ISP or anyone else knowing what you're doing.  OpenSSH makes it easy using the -D option:

First, establish a secure connection with the remote SSH server using the -D command line option. You will pass only one additional thing: the local port you want your proxy listening on. This is the port you will tell your local applications to connect to in order to route traffic through your remote system:

ssh -D local_port_to_listen_on remote_username@remote_hostname.com

As before, you will either be presented with a prompt asking for your password on the remote machine or for the passphrase to your SSH key. Provide it and you will be logged into the remote machine as normal. But here's the cool thing: OpenSSH is now also listening on a port on your LOCAL machine, ready for you to route traffic through it. When you do, it will send that traffic over the encrypted connection to the remote machine, where it will exit onto the Internet. ANY application that can use SOCKS5 can route its traffic this way. This includes Firefox, Thunderbird, most IRC programs, and most other major Internet programs.
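
As a concrete example (with a made-up username and hostname, of course), this sets up a SOCKS proxy on local port 8080:

ssh -D 8080 alice@myserver.example.com

Then, in Firefox's connection settings, point the browser at a SOCKS5 proxy of localhost, port 8080, and all of your browsing flows through the encrypted tunnel.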


Anyone watching your connection will see your traffic emerging from the remote machine, not your local one. Also, your ISP will have no idea what you are doing. Take that, AT&T!
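
The port-block item from the list above uses the very same idea, just with local port forwarding (-L) instead. As a quick sketch (hostnames made up again), if your ISP blocks outbound port 25, you can let your remote server make the connection for you:

ssh -L 2525:mail.example.com:25 alice@myserver.example.com

Point your mail client at localhost, port 2525, and SSH carries the traffic over the encrypted connection to the remote machine, which then connects to mail.example.com on port 25 on your behalf.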






Wednesday, February 19, 2014

Why Mozilla putting ads in Firefox isn't such a bad thing after all

The Mozilla Foundation announced a few weeks ago that it would begin including sponsored content in some of the tiles of its popular Firefox web browser. This, of course, immediately brought out the pitchfork-bearing zealots who insist that Mozilla is compromising the soul of Firefox and starting down some slippery slope that nobody seems able to define. While I'm not excessively happy about Mozilla's decision, I'm a lot more comfortable with it after reading Mozilla Foundation Chair Mitchell Baker's explanation of their plans.

What are those evil plans, you ask? Well, not so evil, it turns out. Basically, Mozilla plans to hand-pick sponsors to advertise in one or two of the six tiles on the new tabs you open. The ads won't have tracking, and the tiles can be turned off completely if you want. Mozilla isn't going to share any information about you or your browsing habits with its advertisers either, so the ads are completely benign.

Some people have been surprised by my support of Mozilla's plans and, to be honest, I was a bit torn when I first read about it. But I believe Mozilla has earned our trust over the years. They've walked the walk and talked the talk. They've defended openness, they've defended user rights, and they've been one of the free Internet's strongest advocates. I see no reason not to trust them with this.

We should also consider that the relationship between Mozilla and Google will be ending soon, which will take a significant amount of revenue out of Mozilla's budget. That money has to be made up in order for the foundation to continue to function, and this seems like the most logical way to do it. In fact, it might bring in more money than the Google deal did, allowing Mozilla to do even greater and better things.

All in all, I'm having a lot of trouble finding a reason to worry about this. I don't believe Mozilla has suddenly become evil or is selling users out. And if they ever do, someone will fork the browser and carry on where Mozilla left off. That's the beauty of open source.

Relax.



Monday, February 17, 2014

The Sad State of the Ubuntu Community

The Ubuntu project has always been about community. Since the beginning, Canonical has tried to create a strong, vibrant, healthy, community around Ubuntu and the Ubuntu local teams (called "LoCo's") were a great way to do that. The idea was that support should be localized. There should be someone physically near you that you can turn to when you have a question or problem you need resolved. Sure, mailing lists, forums, and IRC, are great, but they don't come close to having someone right there with you.

Unfortunately, according to a recent LoCo Census, the local teams are in horrible shape. Some of the teams polled didn't even respond to the census, while others responded but are barely functional. My own LoCo in Oklahoma hasn't seen a mailing list post in almost a full year.

It's a horrible state and it's only getting worse.

We could sit and ask ourselves "what happened?", as I'm sure Canonical is doing while it seeks new ways to revitalize the community. But I think a better and much more relevant question is "what can we do to fix it?"  With the debacle around Windows 8, now seems to be the perfect time for Ubuntu to show a strong presence in local communities. I believe this is especially true in smaller, rural, and poorer communities, where modern computers and software are needed but the costs associated with moving from the soon-to-be-executed Windows XP to Windows 7 or Windows 8 are simply too prohibitive. Those people are the perfect market for Ubuntu and its derivatives, and the LoCos should be the tip of the spear of any effort to reach out to them.

So how can we fix it? I don't have the answers. But I think we seriously need to begin having some open and frank discussions with our community members. We need to find out why they stopped caring, where their passion went, and how we can re-excite them about getting Ubuntu into the hands of their local communities again. Maybe the answer is financial incentives from Canonical for well-performing LoCos, maybe it's swag, or maybe it's something else. Whatever it is, we need to find it, implement it, and push hard to get things moving again.

We still love Ubuntu. We still believe it's the absolute best choice for users coming from the Windows or Mac worlds and it's dead simple for even the newest computer users. But we have to get people on the ground who are willing to get their hands dirty, get active, and push us forward.

What do you think the answer is? Ideas?




Sunday, February 16, 2014

Ubuntu and Mark Shuttleworth: Setting an Open Source Example

You wouldn't think an initialization system would cause a war.  But, for over a year, the debate between the systemd init system and Ubuntu's upstart system has been dividing the Linux community, spawning hundreds of posts on blogs and social media, and fanning the flames of a good old open source war.  At times, it got brutal, with some resorting to name calling and trash talking.  People were invested, often heavily, in one camp or another and there were numerous, solid, technical reasons to adopt either system.

In the end, it came down to a decision by the Debian project. When they announced that they were going to use systemd instead of upstart in the next release of the operating system, it solidified the fact that upstart had, as valiantly as it had fought, lost the war for mind-share and support. systemd would be the init system of the realm and anyone using upstart would ultimately be the odd man out.

With so much passion flowing on both sides, you would think that the decision would have been met with some hostility by those who support upstart. These are people who'd devoted years of their lives to designing a powerful system that now seemed to be simply tossed away. And, in the open source world, an all-out bloodbath might have even been expected. But that's not how it went down, and I'm glad it isn't.

In the end, Canonical founder Mark Shuttleworth posted a very gracious article discussing the issue and, while praising the efforts of those involved in the upstart project, conceded defeat.  The post was titled 'Losing Graciously' and it was exactly that: someone who'd poured money, time, effort, and man-hours into a project that just didn't work out. It was a great show of class, and of what's possible when people can step out of their personal camps and focus on what's best for the community.

Some have said that the creation and death of upstart showed that Canonical wasted a lot of time. I don't believe that. I believe systemd is a better system because of upstart. It forced the developers to up their game because they were up against developers who were hell-bent on creating an even more powerful and awesome system. Sure, systemd is a great system but I think it's better because upstart offered it a serious challenge just when it needed it.  So I think instead of seeing upstart as a waste of time, we should honestly view it as yet another contribution that Canonical and the Ubuntu project have made to the community. Thank you, Canonical.

Winning isn't always everything. Sometimes, losing is a contribution in itself.


Thursday, February 6, 2014

AnonyMail 2.0.33 Coming!

Since the last release of AnonyMail, I've been fortunate enough to receive a lot of feedback from users expressing concerns, filing bug reports, and putting in feature requests. I've been listening to everyone, squashing bugs, and picking the best feature requests for the next version of AnonyMail, 2.0.33. I'm happy to say that we're on the cusp of a new release and I think you're all going to like it!

The new version of AnonyMail features better Tor support, no Python requirement (I now use the cURL library to route messages through Tor), an improved user interface, a feedback mechanism, better stability on Windows, and improved message padding and delivery.

We should see a release within a day for Windows and Linux, and it should be in the Ubuntu Software Center soon after.

What is AnonyMail?

Some people have asked me what AnonyMail is and why they should use it. Let me explain:

There are times when you might need to send a completely anonymous piece of email. There are a few ways to do this: 1) you can set up a fake webmail account somewhere, or 2) you can use an anonymous remailer to send the mail completely anonymously.

There are a few problems with each of those options:

  1. Using a fake webmail address may protect you from the recipient knowing who you are, but the webmail provider still knows. To remedy this, you could use something like the Tor Browser Bundle to connect to the webmail provider, but more and more providers are completely blocking connections from the Tor network, making it nearly impossible to use that approach to hide your identity.

  2. Anonymous remailers, while absolutely rock solid on anonymity, are hard to use. You have to have an intimate knowledge of cryptography, set up PGP, generate a key pair, obtain the public keys for the remailers you intend to use, and then properly encrypt a message so that it will be delivered. Whew! I got winded just typing all of that!

Neither of the two options above is easy to use, reliable, or user-friendly. That's where AnonyMail comes in. AnonyMail is like an anonymous remailer, only easier to use. Because you can route AnonyMail connections through Tor, you can be completely anonymous even to us, so there is no chain that someone seeking to uncover your true identity could follow.  This makes AnonyMail particularly well-suited for whistleblowers, secret crushes, and anyone else needing high anonymity with ease of use.

Why isn't AnonyMail Open Source/Free Software? How can I TRUST you!?!

Whenever you're using security software, especially to express unpopular, controversial, or sometimes illegal speech, it's important that you're able to trust that the software isn't selling you out without your knowledge. As such, the standard security advice is usually 'don't use closed source software', and I completely agree with that. That said, AnonyMail is closed source software.

Because I'm a commercial developer and make my living off of software like AnonyMail, I can't take the gamble on donations that many open source developers have taken. I have to put food on my table, pay the bills, and still have a little money for beer (or Dr. Pepper, in my case) when I'm done. So I've come up with what I believe is a fair compromise that allows users to trust AnonyMail while allowing me to keep it closed source.

I give you the source code.

Whenever you purchase AnonyMail, you get both the binary (precompiled version that you can install on your computer) and the source code which you can review and/or compile yourself. This means that, if you don't trust me, you can take the source code and create your very own installation of AnonyMail with the confidence that I've not slipped anything nasty in there.

So how do I protect my income while giving away the source code? Simple: AnonyMail is not open source or free software. You don't get the right to share it with your friends, rebrand it and create a competing project, or do anything else you can do with open source and free software. You get the right to review the source and compile a copy for yourself. That's it.

Personally, despite the howling objections of many people in the open source community, I find this a nice balance between security and revenue generation.  

AnonyMail is available for Windows, Linux, and (soon) Mac.

I'm always working to make AnonyMail better, so please feel free to shoot me an email with your ideas and suggestions. You can even use the new version of AnonyMail to do it, totally anonymously.  Every new version of AnonyMail includes fixes and improvements that come directly from users just like you, so don't be shy about emailing and sharing your thoughts!

Wednesday, January 8, 2014

PATTS 2014 Coming in April!

It's been quite a while since we've issued an update to our popular group home management system. PATTS was designed back in 2011 and, for the most part, has remained untouched for the last three years, mostly because there was no reason to do any major updates. But I'm pleased to announce that we're hard at work on PATTS 2014 and should see a release sometime in April or May.

PATTS, which stands for Paperwork, Analysis, and Trend Tracking System, is an application that enables group homes to easily manage their paperwork, spot developing behavioral trends before they get out of hand, and easily comply with state and federal laws regarding record keeping and privacy.

PATTS 2011 was a large and complex application that required fairly heavy and costly IT infrastructure to run. This largely priced out smaller organizations that needed a solution like PATTS but simply couldn't afford it. We've addressed this issue by transforming the system from a 'heavy' application that had to be installed on laptops and tablets into a completely web based one that runs in the browser on inexpensive tablet computers.

PATTS features include:
  • Effective management, search, and cataloging of both administrative and shift paperwork
  • The ability to set individual behavioral thresholds to spot developing resident trends early
  • An overall view of what behavioral trends are developing among all residents and/or staff
  • The ability to easily look at historical data to make better decisions for the future
  • Full HIPAA compliance 
  • Hosted and on-premises solutions

Our goal is to revamp PATTS so that it meets the needs of even the smallest group homes. We're also partnering with one of the most respected technology vendors on the planet, Microsoft, to make sure both our hosted and on-premises solutions are robust, stable, and secure. We're excited about the future of PATTS and we're looking forward to seeing where it might lead.

More details to come soon. This was just my 'commercial' to get you ready for the new hotness.

Monday, January 6, 2014

Can Microsoft Continue to Meet Enterprise Needs?

Business Insider published an article today about Microsoft losing the City of Boston as a cloud apps client to Google.  While this is certainly not the only major customer Microsoft has lost to Google's apps-in-the-cloud service, the city marks one of the largest moves away from Redmond's offering and into the arms of the Borg.

Surprisingly, the decision for the city to move to Google wasn't based on price in this instance, since both Microsoft and Google charge pretty close to the same for their various offerings.  According to Bill Oates, CIO for Boston, the decision was based on the city's belief that Microsoft could no longer meet its fast-paced, changing needs for a secure cloud environment.

In other words: they don't trust Microsoft. And why should they?

Over the last few years, the company has played a dangerous game of trying to straddle both the enterprise and consumer software worlds.  Some of those gambles, like Office 365, have paid off; others, like Windows 8, have not, and the failures have begun to shake everyone's confidence.  Until Microsoft decides what it wants to be when it grows up, I suspect it will continue to lose customers to hard-core enterprise companies like Google, which have a more unified and directed approach to their software and web offerings.

Wake up, Microsoft! You can't straddle both worlds for long. Jump into the enterprise hard and start to win back some customers. You're playing a deadly game and, right now, you're losing!