Archive for the ‘Developer’ Category

Will work for Internet Points!

February 3, 2014

For a year and a half I’ve been helping solve problems, write samples and clarify questions to make them easier to answer. It’s not my day job and it doesn’t even pay peanuts. It pays me in something even less tangible … internet points! (more…)

Browser Profiles – an excuse to play with Chrome Extensions

December 27, 2013

Like many people I use a laptop that I carry from home to work and back again. That coupled with browser preferences syncing to my other machines means all my bookmarks and extensions travel everywhere with me.

At work (or on our VPN) there are certain intranet sites I can access that are not public, so I’d prefer not to see them if I can’t click on them. There are also some browser extensions that I don’t want to run at work because they are not on our IT department’s approved list. This means that either I have to stop syncing settings, or use a different browser for work… or come up with a smarter solution. (more…)

Keeping a-head in the clouds

November 19, 2013

One of the great things about developing on today’s cloud platforms is elastic computing. You never know what the peaks are going to look like, but you don’t want to pay for hardware you’ll only use once in a blue moon. So you opt for a dynamically adaptive scalable solution.

If you’ve read any of my posts about jsErrLog (or “jsErrLog posts” if it’s still down) you’ll know that’s what I did for that service. As I’m offering it for free to anyone with a reasonable load I needed something as cost effective as possible (ie free!). When I built it I looked at Windows Azure, Amazon’s EC2, a few smaller options, Virtual Private Servers and finally settled on Google AppEngine – in common with the others it offered a number of options for programming languages and data storage but the big bonus was a no-nonsense free tier.

Sometimes however things don’t go quite as planned…

(more…)

Change the conversation – don’t play the numbers game.

July 12, 2013

For new entrants to the phone or tablet market the conversation always turns to how many apps there are – at launch, a year later, how fast the numbers are growing. The conversation is driven by the incumbents, echoed by the press, and makes it very hard for a newcomer to be taken seriously.

What would happen though if a new entrant to the space, such as Mozilla with the Firefox Phone, decided not to obsess about the numbers game, but to own the narrative and re-write the rules…

Playing the numbers game means you are spread thin, chasing a huge catalog, constantly behind the ball playing “me too” and catch-up, at the mercy of the big fish who probably don’t see you as worth the effort until you have an established presence.

Defining your rules allows you to identify a small selection, maybe a dozen, of apps that users want, need or actually use as a base line and expend significant effort working with those partners to create the best version of their experience on your platform.

You help with engineering, dollars and resources, providing money, talent and demonstrating true partnership. Engage deeply with your partners and share the risk – you both need to be comfortable enough to experiment with new features on your new platform, to iterate and fail fast, but within that small group drive their success while establishing your new platform and demonstrating what is possible.

For most of the incumbents this isn’t the way they play the game. Apple dictate to partners, secure in their position. Google, with Android, rely on OEMs and the scale of their store to drive developers. Microsoft have a huge field Evangelism organization who can wield marketing dollars, but are chasing numbers, have quarterly goals to meet and don’t seem to have the patience for long-term engagements any more. BlackBerry are desperately copying any playbook that seems to work but are finding resurrecting their brand hard going.

For a new player it’s a losing proposition to try and get into their race. Even if you launch with 50 thousand apps there will be questions about quality and about the absence of the “must haves” who won’t have taken the risk, and every omission will hurt. If instead you make the headlines read “Twitter launches their next generation client on Firefox OS” or “Evernote delivers game changing update first for Firefox OS” you can control the conversation.

By controlling the conversation you become a platform that is aspirational and seen as innovative.

That is where technology evangelism has to return to: not being driven by the same old marketing and PR story that is seen as safe conventional wisdom.

Lazy developers make for bad user experiences

March 18, 2013

As a developer I can appreciate that dealing with user input is a pain. Dealing with anything messy humans do is always more annoying than handling nice clean inputs from an API. But developers and designers are human too, and they should think about the experiences they are creating, and how a little bit of consideration for the user can turn a frustrating process into a moment of delight.

  • Required fields: Indicate visually when a field is required, and ask yourself if the field is actually required for what the user is trying to do (delight them and they’ll come back and share more information incrementally). Especially in a world of security leaks I like to minimize what I share and you should help with that.
  • Formatting (phone and credit card numbers) is irrelevant: Should I enter my cell as (425)-555-5555, 4255555555, 425 555 5555 or something else? Actually all of those should be valid as it doesn’t take much effort to strip out spaces, dashes and brackets when you’re validating a credit card or phone number. If you need a particular format for your database or display then re-format it… but don’t force the user to comply with a rigid structure to make your life easier.
  • Don’t be redundant: Don’t make me tell you what type of credit card I’ve entered the number for. Using a simple issuer lookup you can tell me if I just entered an Amex or a Mastercard. If you need me to write a look-up API for you I will, just leave a note in the comments.
  • Passwords are a pain to remember: Just because you think the password rules on your site are obvious (at least one capital, one digit, only special character is an underscore and it must start with a different letter than your username) doesn’t mean users do – they have lots of passwords. Give them a reminder of those arbitrary rules next to the field where they have to enter it, ideally on initial entry and as an absolute must if validation fails.
  • Don’t ask me the same thing twice: In the US a ZIP code can tell me the City and State. Same in Australia or New Zealand or the UK and pretty much anywhere else. Can anyone explain to me why I have to enter both 90210 and Beverly Hills, California on a million forms? By all means display the City/State for me to confirm but don’t waste my time asking me to do a computer’s job. That thing I said about look-up APIs earlier – still true.
  • Don’t be forgetful: Computers are good at remembering stuff, if developers are not being lazy. If I fill in a field or check a box on a form and something goes wrong with validation the only field I should reasonably be expected to re-enter is the password (and if you validate that and it passes assume I know my password and don’t make me rekey those asterisks again). If I checked “accept Ts&Cs” or “Don’t email me crap” the first time… I probably meant it so don’t forget it because I didn’t get my phone number in exactly the format you like.
  • On-the-fly, context sensitive validation is awesome: Make use of onchange and onblur events and Ajax to check each field as I go, to save the user scrolling up and down a page to find what failed. Basic validation, like credit card checksums, for fields that are easy to miskey should not require a full form submission.
  • When things go wrong, show me: When you finally get to a full round-trip validation and have to show the user some errors that need correcting, don’t just bundle some obscure messages at the top of the page – use visual cues and clear explanations to guide them to get it right.
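Several of the points above – stripping formatting before validation, cheap client-side checksum validation, and inferring the card issuer from the number itself – can be sketched in a few lines of JavaScript. This is a minimal illustration only: the function names are mine and the issuer prefix rules are a simplified subset, not a complete look-up.

```javascript
// Strip spaces, dots, dashes and brackets so '(425)-555-5555' and
// '425 555 5555' both validate the same way; re-format later if needed.
function normalizeDigits(input) {
  return input.replace(/[\s().+-]/g, '');
}

// Luhn checksum: a cheap sanity check for numbers that are easy to
// miskey, suitable for on-the-fly validation before any round trip.
function passesLuhn(cardNumber) {
  var digits = normalizeDigits(cardNumber);
  if (!/^\d+$/.test(digits)) return false;
  var sum = 0;
  for (var i = 0; i < digits.length; i++) {
    var d = +digits[digits.length - 1 - i];
    if (i % 2 === 1) {        // double every second digit from the right
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

// Don't make the user pick a card type: infer it from the leading digits.
// (Simplified prefixes for illustration only.)
function cardIssuer(cardNumber) {
  var d = normalizeDigits(cardNumber);
  if (/^3[47]/.test(d)) return 'Amex';
  if (/^4/.test(d)) return 'Visa';
  if (/^5[1-5]/.test(d)) return 'Mastercard';
  return 'Unknown';
}
```

Wire functions like these to your onchange/onblur handlers and the form never needs to reject a number just because the user typed dashes.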

By making the process simple and eliminating points where the user can stumble, you’re helping ensure that your form is not a roadblock where the user might get frustrated and abandon the process. When you go to the supermarket you look for the shortest line, or the easiest way to checkout, and you get frustrated if the process isn’t smooth. It’s just the same on the Web.

Even if you think you’ve gone beyond the things I mention above have you gone far enough? Are you watching your logs and other telemetry to see what fields users are stumbling on? Could you streamline the process further?

Your challenge: As designers and developers you should embrace the opportunity to streamline, and use every tool at your disposal to deliver a great user experience.

GUIDs in JavaScript

July 14, 2011

Update: From the comments below it looks like I arrived at the same solution as someone else had come up with earlier. I recommend you check out the Broofa.com code as they have done more work on making it performant and robust.

—-

A while ago I needed a quick and simple way to generate a GUID in a JavaScript project, but most of the examples that I could find were either slow, cumbersome or didn’t always generate GUIDs that would pass verification, so I had an attempt at writing my own. It had to be performant, small and robust enough to use in a real-world environment at scale.

Well, after generating 50 million GUIDs across all the mainstream browsers (and some pretty obscure ones!) in my other logging system (an internal project, not jsErrLog – though it’s used there as well) I’m happy that it’s behaving well enough to share, so with no further ado…

function guid() { // http://www.ietf.org/rfc/rfc4122.txt section 4.4
    return 'aaaaaaaa-aaaa-4aaa-baaa-aaaaaaaaaaaa'.replace(/[ab]/g, function(ch) {
        var digit = Math.random()*16|0, newch = ch == 'a' ? digit : (digit&0x3|0x8);
        return newch.toString(16);
    }).toUpperCase();
}

Regular expressions, nested functions and logical operators… probably the most I’ve ever crammed into that few characters, though if you’re really obsessive you can crunch it down even further to one line at the cost of readability:

guid=function(){return"aaaaaaaa-aaaa-4aaa-baaa-aaaaaaaaaaaa".replace(/[ab]/g,function(ch){var a=Math.random()*16|0;return(ch=="a"?a:a&3|8).toString(16)}).toUpperCase()};
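If you want to convince yourself the output really matches RFC 4122, a quick self-test (my own harness, repeating the function so the snippet stands alone) is to hammer the generator and check every result against the version-4 shape – a literal 4 in the version slot and 8, 9, A or B leading the clock-sequence field:

```javascript
// Same generator as above, repeated so this snippet runs standalone.
function guid() { // http://www.ietf.org/rfc/rfc4122.txt section 4.4
  return 'aaaaaaaa-aaaa-4aaa-baaa-aaaaaaaaaaaa'.replace(/[ab]/g, function(ch) {
    var digit = Math.random()*16|0, newch = ch == 'a' ? digit : (digit&0x3|0x8);
    return newch.toString(16);
  }).toUpperCase();
}

// Version 4 GUID shape: third group starts with 4, fourth with 8, 9, A or B.
var v4Shape = /^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/;
for (var i = 0; i < 10000; i++) {
  if (!v4Shape.test(guid())) throw new Error('invalid GUID generated');
}
```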

Let Frebber make your FREB files easier to handle

June 16, 2011

If you have used IIS for any length of time you have probably come across the term FREB. If you don’t know what it is then you should read this great introduction to Failed Request Tracing in IIS. It’s applicable to IIS7 and above and is a great tool.

At a high level FREB produces an XML file containing details of errors you are interested in – you specify the error code you want to trap, the execution time threshold or a number of other filters – and provides a wealth of information about what was happening under the covers in IIS.

The problem with FREB Tracing though is that it’s very easy to end up with a folder containing hundreds or even thousands of error reports – all named a variant on fr000123.xml – and you have no way to quickly tell which were the ones with details of 401.3 errors, or which ones failed because they took more than 5 seconds to execute.

Well, thanks to the wonders of PowerShell there’s now a simple solution.

Frebber scans the output directory where your FREB logs are stored and copies the files into a new subdirectory (called .Frebber of course) while at the same time renaming the files based on the nature of the error report they contain.

For instance fr000012.xml may contain details of an HTTP 415 error and took 2571ms to execute, so the file would be renamed 415_STATUS_CODE_2571_fr000012.xml

It’s a fairly simple script, and if you have a look at the XML format inside a FREB report you’ll be able to see how to adapt it quickly to your particular needs. Meanwhile feel free to use the example below – I’d love to hear any comments or suggestions in the comments.

Oh, it does make one pretty big assumption… that your FREB files are going to the default directory. If that’s not the case then you will need to modify that line (I might get around to making the script more complete and add parameters for source and destination directories and some renaming selection criteria, but right now this works pretty well for me).

$frebDir = "C:\inetpub\logs\FailedReqLogFiles\W3SVC1\"
echo "Frebbering...."
$fileEntries = Get-ChildItem ($frebDir + "*.*") -include *.xml
$outDir = $frebDir + ".Frebber\"
# Create the directory for the Frebberized files
$temp = New-Item $outDir -type directory -force
# copy in the freb.xsl so you can still view them
Copy-Item ($frebDir + "freb.xsl") $outDir
$numFrebbered = 0
foreach ($fileName in $fileEntries)
{
    [System.Xml.XmlDocument] $xd = New-Object System.Xml.XmlDocument
    $frebFile = $frebDir + $fileName.name
    $xd.Load($frebFile)
    $nodelist = $xd.SelectNodes("/failedRequest")
    foreach ($testCaseNode in $nodelist)
    {
        $url = $testCaseNode.GetAttribute("url")
        $statusCode = $testCaseNode.GetAttribute("statusCode")
        $failureReason = $testCaseNode.GetAttribute("failureReason")
        $timeTaken = $testCaseNode.GetAttribute("timeTaken")
        # eg 415_STATUS_CODE_2571_fr000012.xml
        $outFile = $outDir + $statusCode + "_" + $failureReason + "_" + $timeTaken + "_" + $fileName.name
        Copy-Item $frebFile $outFile
        $numFrebbered += 1
    }
}
echo "Frebbered $numFrebbered files to $outDir."

jsErrLog: now alerts via XMPP

June 13, 2011

Although it’s nice to know that the jsErrLog service is sitting there recording errors that your users are seeing it does put the onus on developers to remember to check the report page for their URL to see if there have been any issues.

To make things a little more pro-active registered users can now connect to an XMPP (Google Chat) client (eg Digsby) and every time there’s a new error reported the bot will send you an alert.

Because you might get a flurry of messages if you deploy a version and there’s an error, or a 3rd party component has a problem, the bot also listens for a set of messages so it’s easy to suspend the alerting (or turn it back on when the problem has been fixed).

At the moment there are a few restrictions:

  • alerts have to match a specific URL
  • for a given user all alerts are turned off/on (no per-URL granularity)
  • alerting is only available to users who’ve made a donation or promoted jsErrLog

The reason for the first one is a limitation in the way AppEngine lets me query data (unlike SQL, the GQL query language does not support the CONTAINS or LIKE verbs)… I’m looking for a solution to that.

The second is a feature that I plan to add soon depending on demand.

The third… at the moment it takes a little bit of setup to add new users so I’m adding it as the first freemium feature though this may change. If you want that enabled please let me know the URL you are monitoring and your Google Chat ID and I’ll let you know what else you need to do to enable it…

jsErrLog – now with XML

June 9, 2011

To help analyze data from jsErrLog – my JavaScript Error Log service – I added a new feature today: an XML data feed for reports.

You can access a report as normal and view it in the browser, eg the sample report and on there you will now see a direct link to the XML version of the report.

If you know the URL you want to report against then you simply access it via http://jserrlog.appspot.com/report.xml?sn=http://blog.offbeatmammal.com where the parameter after sn is the URL you want to query.
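If you’re building that feed URL in script rather than by hand, percent-encoding the sn value is the safe option. A sketch (the helper name is my own; the service also accepts the raw URL as shown above):

```javascript
// Hypothetical helper: build the XML report URL for a monitored site.
// encodeURIComponent keeps the embedded URL intact as a query-string value.
function jsErrLogReportUrl(site) {
  return 'http://jserrlog.appspot.com/report.xml?sn=' + encodeURIComponent(site);
}

console.log(jsErrLogReportUrl('http://blog.offbeatmammal.com'));
// → http://jserrlog.appspot.com/report.xml?sn=http%3A%2F%2Fblog.offbeatmammal.com
```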


Both the report and the XML show up to the last 500 results for the URL. I plan to add limits to the XML feed, and pagination to the HTML report in a future release (let me know in the comments what’s more important, and any other requests). I would like to implement a full OData feed for the data but haven’t found a good Python / App Engine sample out there yet…

One great thing about having the data available as an XML source is that you can add it as a Data Source in Excel and from there filter and sort to your heart’s content.


Azure Dynamic Compression

April 9, 2011

On a normal Windows IIS installation it’s pretty easy to turn on dynamic compression for WCF and other served content to reduce the amount of bandwidth you need to consume (important when you are charged by the byte) – you just change the server properties to enable dynamic as well as the more common static compression.

With Windows Azure though it’s a little more interesting because with roles dynamically assigned and started from a standard instance you don’t have much control … unless you’re used to doing everything from the command line …

Luckily one of the nice things that you can do with an Azure role is script actions to take place as part of the initialization. The process is as simple as adding the commands you need to execute to a batch script that gets deployed as part of your project and calling it at the relevant time.

The first thing your script needs to do is turn dynamic compression on for the server in that role:

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:urlCompression /doDynamicCompression:true /commit:apphost

You then want to set the minimum size for files to be compressed (in bytes)

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:system.webServer/httpCompression -minFileSizeForComp:50 /commit:apphost

Finally your script should specify the MIME types that you want to enable compression for

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/xml',enabled='true'] /commit:apphost

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/atom+xml',enabled='true'] /commit:apphost

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/json',enabled='true'] /commit:apphost

If you have a problem with MIME types like atom+xml not registering properly you may need to escape the plus sign and replace the string with ‘atom%u002bxml’ – I’ve had success with both methods

You can add as many MIME types as you need to the list, and remember that sometimes you also need to specify the character set you are using:

  • "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/xml;charset=utf-8',enabled='true'] /commit:apphost

And then when you’re done exit the script to tidy up gracefully

  • exit /b 0

Once you have combined those steps together in a script and saved it as (eg) EnableDynamicCompression.cmd you should add the script to your Visual Studio project and make sure you select “Copy Always” in the properties for the file to ensure it gets correctly deployed.

Finally you need to add a reference to that startup script in your project’s ServiceDefinition.csdef file and then deploy your project as normal.

    <Startup>
        <Task commandLine="EnableDynamicCompression.cmd" executionContext="elevated" taskType="simple"></Task>
    </Startup>

Finally… how do you know if it’s working or not? The thing that often tricks people into thinking it’s broken is a corporate proxy server, which will frequently un-compress the data for you on the way past. You can check for yourself using a tool like Fiddler to examine the response and make sure it has been gzipped, or you can visit http://www.whatsmyip.org/http_compression/ and test that way (the latter is good if you are behind a proxy which interferes with the compression).
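If you’d rather script the check than eyeball Fiddler, the decision boils down to inspecting the Content-Encoding response header. A minimal sketch (the function name is mine; with a modern runtime you could feed it the headers from a fetch response, remembering that a proxy in the middle may have stripped the header):

```javascript
// Decide from response headers whether dynamic compression happened.
// Expects a plain object of lower-cased header names to values.
function wasCompressed(headers) {
  var enc = (headers['content-encoding'] || '').toLowerCase();
  return enc.indexOf('gzip') !== -1 || enc.indexOf('deflate') !== -1;
}

console.log(wasCompressed({ 'content-encoding': 'gzip' })); // true
console.log(wasCompressed({}));                             // false
```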

