This blog has, IMO, some great resources. Unfortunately, some of those resources are becoming less relevant. I'm still blogging, learning tech and helping others...please find me at my new home on http://www.jameschambers.com/.

Thursday, October 28, 2010

Visual Studio 2010 Wishlist – Better Collapsing Region Support

Here it all is, in one picture: everything that could be better with Visual Studio 2010’s collapsing helpers.

image

1) XML Comment Block Collapsing

Visual Studio has had great support for XML commenting for some time, specifically with the triple slash to quickly document existing functions.  Which is why this sucks so bad.
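For anyone who hasn't bumped into it, this is the kind of block I'm talking about (the method below is made up, but the triple-slash pattern is standard C# XML documentation):

public class BillingCalculator
{
    /// <summary>
    /// Calculates the pro-rated amount owing for a partial billing period.
    /// </summary>
    /// <param name="daysUsed">Days of service consumed in the period.</param>
    /// <param name="dailyRate">The rate charged per day of service.</param>
    /// <returns>The amount to bill for the partial period.</returns>
    public decimal CalculateProRatedAmount(int daysUsed, decimal dailyRate)
    {
        // Nothing clever here; the method only exists to carry the comment block.
        return daysUsed * dailyRate;
    }
}

Collapse that comment block in the editor and the whole thing is reduced to a single /// <summary>, which tells you exactly nothing about the method.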

Summary?  Seriously?  I don’t need to know what the first tag is in my comment block.  The IDE already knows to treat these blocks differently (it allows you to collapse them), so why not show me something useful?  Even the first 40 chars followed by a … would be great.  Keep it to one line (that’s why I collapsed it), but let me see what it’s about.

End of rant.

2) and 4) Contiguous Comment Lines

Here I have a series of comments, one after another.  I’d like to be able to collapse them.

3) and 5) Language Constructs

This should be a no-brainer.  If statements, foreach loops, try blocks, for loops…they should all be collapsible.

Further, how about supporting SHIFT + CTRL + ‘+’ and ‘-’ to handle this one?  What’s that? You’re in a foreach?  No worries, let me collapse that for you quickly while you figure out context, then you can easily expand back out!

That would be sweet.

6) Arbitrary Selection

When I margin-select, or multi-line select any block of code, I would like to see a collapse marker appear in the margin.

But it’s all good…

The truth is that I am so completely fortunate to have the means to work on a big fat 24” monitor and I am not challenged with space. 

Just about, but not quite.

I can still use CTRL+Mouse Wheel to zoom in/out and I do have 3 screens in front of me (one 1900x1200 and two 1280x1024) for real estate.  When things get really tight, vertically, I can always resort to using auto-hide on my error list.  Pshshh!  I don’t have any errors anyways!

I can work through the lack of support for these collapsing features, but I don’t envy the fella who’s got to work on a smaller screen.  In spite of the level of awesomeness in Visual Studio 2010, there is clearly still room to grow, and I love thinking about how many good things must be coming down the road.

Friday, October 8, 2010

Unravelling the Data – Ill-Formatted Data

 

Read the background to this post.

When Bad Data Is Required

Fixing the data in the legacy system was not something that could be done in place.  What I would refer to as ‘bad’ data was in some cases the glue that held reports together and made things like billing work.

This was one of the first things I had to resolve.  My original approach was to try to “self-heal” the data through a combination of regular expressions, string replacements and templated hints and helpers.  With the sheer number of discrepancies, that approach was DOA, and manual intervention was required.
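To give a sense of what that first attempt looked like, the idea was to normalize values in place with code along these lines (the field and the patterns here are invented for illustration; the real rules were far messier):

using System.Text.RegularExpressions;

public static class PhoneScrubber
{
    // Strip everything but digits, then re-format anything that looks like a
    // ten-digit North American number. Anything else gets flagged for manual
    // review rather than guessed at.
    public static bool TryNormalize(string raw, out string cleaned)
    {
        string digits = Regex.Replace(raw ?? string.Empty, @"\D", string.Empty);
        if (digits.Length == 10)
        {
            cleaned = Regex.Replace(digits, @"(\d{3})(\d{3})(\d{4})", "$1-$2-$3");
            return true;
        }

        cleaned = raw;
        return false;
    }
}

Multiply that by every dual-purpose field and every customer-specific convention and it becomes clear why the fully automated route was DOA.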

A Side Order of Data Fixin’

I took a snapshot of the database and added additional columns to the tables where combined data was present.  To understand ‘combined data’ a little background will help.

At various points in the application lifecycle, management had decided that they weren’t going to use certain fields for their original purpose and started using them for a new one.  In other scenarios, they decided to use the fields in one context for some customers and in a different context for other customers.

Depending on the customer and how long it took employees to shake old habits, these fields were used in differing ways over extended periods of time.  Furthermore, even if there had been a clear cut-over point, none of the records in the database have a last-modified date or any kind of audit log that reveals when a customer record is modified (in a meaningful way).

Thus, my side-order approach faced another problem: there was no clean cut-off in the data, and the existing applications needed to keep running.  A snapshot of the data today wouldn’t help in the transition six months down the road.

The Birth of the Transition Platform

The solution was to create an ASP.NET MVC application, hosted only on the intranet, that used much of my original approach to identifying bad data but left the “healing” to an end user.

Where possible, I used jQuery to look up context-based fixes through controller actions and mashed-up some save functionality by POSTing to the legacy ASP pages of the original application.  Where it wasn’t possible (where functionality would be affected by changes to data) I created proxy tables to house the ‘corrected’ version of the data and wrote some monitors to periodically check to make sure that data was up-to-date.

I grouped the functionality of the fixes into distinct controllers.  For instance, anything related to a billing address was in the BillingAddressController, with actions to support the corrections required for errors related to that piece. The models focused on view-model versions of the “bad data”, and I used repositories not only to connect to the legacy system, but also to maintain a worklog of outstanding and completed tasks.
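A rough sketch of the shape this took (the type and member names here are illustrative stand-ins, not the actual code):

using System.Web.Mvc;

// The repository and model types are stand-ins to make the sketch hang together.
public interface IBillingAddressRepository
{
    object GetSuggestedFixes(int customerId);
    void SaveCorrection(BillingAddressCorrection correction);
    void MarkTaskComplete(int taskId);
}

public class BillingAddressCorrection
{
    public int TaskId { get; set; }
    public int CustomerId { get; set; }
    public string CorrectedAddress { get; set; }
}

public class BillingAddressController : Controller
{
    private readonly IBillingAddressRepository _repository;

    public BillingAddressController(IBillingAddressRepository repository)
    {
        _repository = repository;
    }

    // Hit from jQuery to fetch context-based suggestions for a flagged record.
    public JsonResult SuggestedFixes(int customerId)
    {
        return Json(_repository.GetSuggestedFixes(customerId), JsonRequestBehavior.AllowGet);
    }

    // Accepts the user's corrected value, saves it (to a proxy table or via the
    // legacy ASP page) and logs the worklog item as completed.
    [HttpPost]
    public ActionResult Correct(BillingAddressCorrection model)
    {
        _repository.SaveCorrection(model);
        _repository.MarkTaskComplete(model.TaskId);
        return RedirectToAction("Index");
    }
}

The jQuery side would be nothing fancier than a $.getJSON call against SuggestedFixes to populate the form before the user confirms and submits the correction.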

This worked great, as I was also able to say, at any given point, what percentage of any given set of data had been corrected.

This process continues today, and time is devoted to cleaning data each week.  All three of the legacy systems continue to get (mis)used, though accuracy has been greatly improved.  As users became aware of the expected formats, they also became more conscious of how they were entering data into the older software.

This first win made the next steps more plausible.

Next up: Data that Could be Auto-Corrected

Thursday, October 7, 2010

Where are we Taking this Thing?

In a way, I have been a linguist and advocate of literacy for most of my life, but perhaps not as you would expect. 

I started copying programs from books and magazines when I was 4 years old.  I started writing my own code when I was about 7.  As I gained a greater knowledge of computer programming, my concern also grew about how others would learn.  As technology has advanced and topics in computer science have become "solved", the underlying complexities have also grown, and I worry that we are raising a generation that will not be equipped to deal with the emerging languages.

In fifth grade I wrote a text-based choose-your-own-adventure game on the Commodore 64 and brought my creation to school.  My classmates could put their own names in and play along, choosing their way through my somewhat limited and unoriginal stories. I stood back in the computer lab and watched as they played; they were fascinated!  I remember my teacher, Mr. Pugh, came over and said, "You know, James, most of them won't understand what you've done."

When I wanted to see graphics on the screen as a kid, I set values mapped to registers in video memory that would turn pixels on and off on the screen.  We programmed the hardware. We “mapped bits” and created “bit maps”.

Today, with a single line of code, we can bring a myriad of pixels to life with vibrant color and movement and full-screen HD video streaming across a network we don't even own.  What you tell the computer to do is no longer what the computer is doing: it's doing much, much more and it doesn't require of you a greater understanding.

Here's an excerpt from Douglas Rushkoff's new book, Program or be Programmed:

When human beings acquired language, we learned not just how to listen but how to speak. When we gained literacy, we learned not just how to read but how to write. And as we move into an increasingly digital reality, we must learn not just how to use programs but how to make them.

In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed. Choose the former, and you gain access to the control panel of civilization. Choose the latter, and it could be the last real choice you get to make.

I don't necessarily buy into the doomsday duality scenario of zombies and computer programmers, but there is some truth in there, and I wonder what the outcomes will be for humanity and culture.

Wednesday, October 6, 2010

Unravelling the Data – Understanding the Starting Point

So, we’re now into October and the year is passing quickly.  The major function of my employment – helping the organization flip to a new operations platform – is nearing completion.  As well, I have just wrapped up an 11-week series of articles with a publisher that I am very excited to share (but have to wait a little still!).  The articles explain why I’ve been so scarce here on my blog, but I am glad to have some time to invest in this again…especially with the release of the MVC 3 framework!

What I Actually Do

My current work – at its core – is a data conversion project, but don’t let the simplicity of that synopsis fool you.

The reality is that when it comes to inventory, billing, service and customer management, the data conversion is the easy part of flipping the company’s software.

Often, it’s the process changes that can cripple the adoption of a new platform, especially when you’re moving from custom-developed software to an off-the-shelf product.  Change can be very hard for some users.

I have the good (ha!) fortune here of working through both data and process transformations.

The Transition Platform

Being the only developer on the project – and in the organization – I do have the pleasure of being able to pick whatever tools I want to work with, and the backing of a company that pays for those tools for me.

If you’ve ever hit my blog you know that I am a huge fan of the .NET Framework and the ecosystem that you get to be a part of when you develop software within it.  The tools have come so far in the last decade that you would not even believe that the same company made them.

Great progress has been made – albeit at times slower than other vendors in certain areas.  But with Visual Studio 2010 (which I switched to halfway through the project) and the MVC Framework, I was literally laughing at how trivial some of the tasks became.

The vertical nature of a development environment and a deployment environment that are designed to work together makes things that much more straightforward.

It is important to note that my development over the last year was not an end in itself.  What I produced was simply a staging platform that would facilitate a nearly-live transition to the target billing and customer management system.  My job, done right, would leave none of my own software in use by end users.

Onto The Problem with the Data

Not all data is a nightmare.  A well-normalized database with referential integrity, proper field-level validation and the like will go a long way to helping you establish a plan of action when trying to make the conversion happen.  Distinct stored procedures coupled with single-purpose, highly-reusable code make for easily comprehended intention.

Sadly, I was not working with any of these.  The reality is that I was faced with the following problems (er, opportunities) that I had to develop for:

  • There are over 650,000 records in 400 tables. This is not a problem in and of itself, and it’s not even a large amount of data compared to projects I’ve worked on with tens of millions of rows.  It likely wouldn’t be a problem for anyone, unless they had to go through it line by line…
  • I had to go through it line by line.  Sort of.  There were several key problems with the data that required careful analysis to get through: dual-purpose fields, fields that were re-purposed after 4 years of use, null values where keys were expected, and orphaned records.
  • The data conversion couldn’t happen – or begin to happen – until some of the critical issues were resolved.  This meant developing solutions that could identify potentially bad data and providing a way for a user to resolve it.  It also meant waiting for human resources that had the time to do so.
  • The legacy software drove the business processes, then the software was shaped around the business processes that were derived from the software.  This feedback loop led to non-standard practices and processes that don’t match up with software in the industry (but have otherwise served the company well).
  • Key constraints weren’t enforced, and there were no indexes.  Key names were not consistent.  There were no relationships defined.  Some “relationships” were inferred by breaking apart data and building “keys” on the fly by combining text from different parts of different records (inventory was tied to a customer only by combining data from the customer, the installation work order and properties of the installer, for example; see the sketch just after this list).
  • The application was developed in classic ASP and the logic for dealing with the data was stored across hundreds of individual files.  Understanding a seemingly simple procedure was undoubtedly wrapped up in hundreds of lines of script, sometimes in as many as a dozen different files.
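To make that key-building point concrete, the “relationship” between an inventory item and a customer looked something like the following, rebuilt in C# with invented field names (the real thing lived in classic ASP string handling):

// There is no foreign key anywhere; the link only exists if you rebuild this
// string exactly the way the classic ASP pages happened to build it.
public static class LegacyKeys
{
    public static string BuildInventoryKey(string customerAccountCode, string installWorkOrderNumber, string installerInitials)
    {
        return string.Format("{0}-{1}-{2}",
            (customerAccountCode ?? string.Empty).Trim().ToUpperInvariant(),
            (installWorkOrderNumber ?? string.Empty).Trim(),
            (installerInitials ?? string.Empty).Trim().ToUpperInvariant());
    }
}

Change any one of those source fields by hand (and nothing stopped anyone from doing so) and the “relationship” silently evaporates.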

Mashing Up Data

The items listed above were all significant challenges in and of themselves, but the reality is that these are just a sample of the problems (er, opportunities), from just one system.  I had three to work with, and all were joined by a single, ASP script-generated key.  If you just threw up in your mouth a little bit, I forgive you.  I did the same when I saw that, too.

Worse, the key was stored as editable text in all three systems.  Because of a lack of role- and row-level security, someone working their second day at the company could change the key or switch keys between users.  It was a little scary.

And I can’t imagine a manager in the world who likes to hear, “Hey, I’m going to just take three unrelated sets of data, mash them up and let you run your business on it, mmmkay?”  Obviously a better approach was needed.

Now, Here’s How I Got Through

It took over a year, but I am now close enough to the finish line that I could throw a chicken over it.  In post-mortem fashion, over the next few posts I’ll talk about each of the challenges I had to work through and how I tackled them.

Stay tuned for: Ill-Formatted Data

Thursday, September 16, 2010

Firefox is Really Slow Testing Sites on Localhost

If you are developing web sites and are trying to test on the major browsers you’ll likely notice that IE and Chrome are quick like bacon but Firefox poots along like a hamster on crutches.  This is true when you have an IPv6 IP address on your NIC and you are working on localhost.

It is particularly frustrating when you’re doing Ajax scripting…especially with autocomplete-type functionality.

The Fix

Here’s how to bury it and fuel Firefox to surf your development server at superhighway speeds:

  • Open up Firefox and type about:config in the address bar
    image
  • Promise to be careful…seriously ;o)
    image
  • In the filter input, type ipv
    image
  • Double-click on ipv4OnlyDomains, and type localhost
    image
  • Click OK to confirm changes.

I didn’t have to restart Firefox to get this to work, but YMMV.  Now she should run like a beauty.

Hope this helps!

Thursday, September 9, 2010

Good for a Smile

Been really busy at work and with a couple of side projects (including a writing contract that I will announce shortly). 

A buddy emailed this to me (sorry, I can’t credit the original artist, if you know, please pass it on!) and I thought it was worth sharing.

 

image

This is particularly funny to me because:

a) I do shadow puppets with my kids all the time, and,
b) I’m actually pretty tired.

Wednesday, September 1, 2010

Getting Started in Azure - The PhluffyFotos Sample Application

I initially wanted to create a post that would get someone up-and-running with Windows Azure, start to finish, but the reality is that there is no “simple” app you can create. While you can create a simple data service, or a simple MVC web site, or even walk through the implementation of a simple worker role, there are many parts to an Azure application, each warranting their own series of posts.

However, I found an out: the Azure samples and training site.  I have been around long enough to develop within most of the paradigms required for Azure development, so, with my head as big as it is, I didn’t want to go through a walkthrough that was dozens of pages long.  There is something that is instantly gratifying about having a working app in front of you that you can poke and prod and change and learn to understand.

What this post is not

This is not a comprehensive walkthrough or a hand-holding exercise to get you up-to-speed in Azure development. You will not be a professional Azure developer ready for your first cloud app assignment.

I am not sharing the code as text in this post.  If you’re going to follow along, you’re going to have to have the sample downloaded (the link is inline in the post).  My assumption is that you’re able to navigate your way through a solution and find the code files I’m discussing.

What this post actually is

The first part of this post should take you less than 30 minutes and you will have PhluffyFotos running locally.  The application makes use of Azure data services, workers, and the ASP.NET MVC 2 framework.  It’s billed as fairly 2.0-ish, and has a functional Silverlight 2.0 slideshow viewer to boot.

image

The second part of the post breaks down some of the UX elements and explores some of the tech that powers said UX.

I have a fairly clean install of Windows 7 with Visual Studio 2010 running on it, so there should be no surprises if you’re running a similar setup.  As I progressed through the procedure I took notes so that you should be able to replicate my steps.  If you find that I have missed anything, please let me know!

Enable Azure tools through the Visual Studio 2010 Project Templates

Create a new project and select the “cloud” category.  Walk through the download, exit VS and install the tools.  You can obviously omit this step if you’ve already set up the Azure tools on your machine.

Download the sample app

This is located here on CodePlex.  Unzip it to its default location.

Enable PowerShell Scripts for Scripts Signed by a Trusted Source

To do this, run PowerShell in administrator mode and use the following command:

Set-ExecutionPolicy RemoteSigned

Build PhluffyFotos

Open Visual Studio 2010 in administrator mode, then navigate to and open up the solution in the code directory.  It’s located in the root of the folder you unzipped in the second step.

Do a CTRL+SHIFT+B on the solution to build out the bits.

Close down VS2010.

Provision Local Queues and Storage

This is required so that you do not have to sign up for an Azure account.  At this time, only residents in the USA have the opportunity to create one (even for test purposes) without a credit card.

Open up a command prompt in administrator mode and navigate to your setup/scripts directory in the project folder.  Type the following command and hit ENTER:

provision.cmd

You don’t technically have to run this from a command line, but if you just execute the script without a command line you’ll miss any feedback/error messages should any surface.

Run the App!

Finally, open Visual Studio 2010 with administrative privileges. Open the PhluffyFotos solution and press F5. 

Create yourself an account and start uploading some images.  You can try to navigate around the site, add additional albums and use the slide show.

image

This concludes the first part of this post, and you should be running the app without much fuss.

First Thoughts – Interface and Basics

There’s a lot to set up here, but keep in mind that we’re actually jumping into a project that is essentially complete.  You’re required to do all of the things a project would require as you develop it, but you have to do it all at once to get it set up.

The sample application isn’t robust enough to be production quality, but as a sample, that is to be expected.

After uploading a new foto…erm…photo, the image does not seem to appear:

image

However, after navigating back to the same page through the album/photo selection process (or waiting several seconds and pressing refresh), the photo appears:

image

After deeper review, this seems to be a result of the worker process not finishing the thumbnail generation before the site returns the view that requires the image.  I don’t know if this has to do with local performance of the fabric at this point or other variables (such as the 2-second queue sleep in the config).
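For context, the worker side is essentially a poll-and-sleep loop, so a queue message written during a request may not be processed until a second or two later. A simplified sketch of the pattern (not the sample’s exact code, and the queue interface here is a stand-in):

using System.Threading;

public interface IPhotoQueueMessage { }

public interface IPhotoQueue
{
    IPhotoQueueMessage GetMessage();
    void DeleteMessage(IPhotoQueueMessage message);
}

public class WorkerLoopSketch
{
    public void Run(IPhotoQueue queue)
    {
        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // The real sample dispatches CreateThumbnail, CleanupPhoto or
                // CleanupAlbum based on the message contents.
                ProcessMessage(message);
                queue.DeleteMessage(message);
            }
            else
            {
                // Nothing queued: sleep before polling again. This gap is why a
                // freshly uploaded photo can miss the very next page load.
                Thread.Sleep(2000);
            }
        }
    }

    private void ProcessMessage(IPhotoQueueMessage message)
    {
        // Resize the image, write the thumbnail blob, update the table row, etc.
    }
}

With the sleep set to 2 seconds, an upload that returns its view immediately can easily beat the thumbnail into existence, which would explain the blank image above.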

Another glitch (in my mind) would be that albums without photos are not represented in the interface until you have a picture.  While this makes sense for people who don’t own the albums (you, visiting my profile), for the person who just created the album, it’s a little confusing.

For example, if I create an album called “My Cherished Family Photos”, then navigate to my list of albums, the newly created album is not there.  If you then try to create a new one with the same name (assuming that the “add” didn’t take), the application just throws an exception.

Finally, and without an extremely deep dive, once you’ve selected an album to view, the MasterPage link to Home is broken, only ever returning you to the album cover and not your full list of albums.  Either the link text should be changed, or the link, or both.

Interesting Technical Bits

I was a little worried when I first opened the app as there were several projects in the solution. As a first-stop for Azure exploration, this might be a little intimidating.

image

Let’s have a quick look at the breakdown here, but a little out of order:

  • PhluffyFotos.WebUX
    • Used to render the interface, this is an ASP.NET MVC 2.0 project.  There are some very simple view models, the appropriate controllers and expected views and references to all the other projects, save Worker.
    • There is a third-party component here as well, Vertigo.SlideShow, which is a Silverlight control used to display a slide show of the photos.  It accepts an XML string of photo URLs and renders the images from the selected album on the site.
  • AspProviders
    • These are the ASP.NET role, membership, profile and session state providers that will be used with the application.  This goes back to the beauty of ASP.NET 2.0 – we are still able to use the good bits in ASP.NET like authentication and authorization, but we can sub out our providers to use the back-ends that we like.  In this case, it’s built on the Azure StorageClient and linked to the MVC site in the project with entries in Web.Config.
  • PhluffyFotos.Data & PhluffyFotos.Data.WindowsAzure
    • Data is where the models for the site – Album, Photo and Tag – are defined.  We also have the definition of the IPhotoRepository interface, which sets up the contract for creating and manipulating the above models, as well as searching for photos based on tags (a rough sketch of what such a contract looks like follows this list).  A few other helper methods support some lower-level procedures such as initializing the user’s storage and building the list of photos for the slideshow.
    • Data.WindowsAzure takes the models and Azurifies them, primarily by adding simple wrapper classes that inherit from TableServiceEntity and store the model objects as Azure-compatible rows.  This project also contains the PhotoAlbumDataContext as well as conversion classes for moving the Azure version of the entities back to project models.
  • PhluffyFotos.Worker
    • The WorkerRole class in this project is the implementation of the abstract base RoleEntryPoint class for the Azure worker.  There are OnStart, OnStop and Run methods common to all services.  When run, it will execute one of three predefined commands: CreateThumbnail, CleanupPhoto or CleanupAlbum.  This worker is executed until processing in all queues is completed.
  • PhluffyFotos
    • This is the project that defines the roles that are used in this Azure application: WebUX and Worker.
    • The configuration defines the HTTP endpoints, the maximum number of instances of a role, and any custom settings that may be required for the roles.
    • This is also the default start-up project for the solution.  The magic of Visual Studio uses this project to ensure that the local development fabric is running then deploys the service and launches our web site (the default HTTP endpoint on port 80).  Pretty slick.
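To give a feel for the contract in PhluffyFotos.Data mentioned above, the interface the web and worker projects code against has roughly this shape. The member names below are my approximation of the operations described, not a copy of the sample’s interface:

using System.Collections.Generic;

// Minimal stand-ins for the sample's models, just to make the sketch compile.
public class Album { public string Name { get; set; } }
public class Photo { public string Id { get; set; } public string Title { get; set; } }

public interface IPhotoRepository
{
    IEnumerable<Album> GetAlbums(string userName);
    IEnumerable<Photo> GetPhotos(string userName, string albumName);
    IEnumerable<Photo> FindPhotosByTag(string tag);

    void CreateAlbum(string userName, string albumName);
    void AddPhoto(string userName, string albumName, Photo photo, byte[] imageBytes);
    void DeletePhoto(string userName, string albumName, string photoId);
    void DeleteAlbum(string userName, string albumName);

    // The lower-level helpers called out above.
    void InitializeStorage(string userName);
    string BuildSlideShowXml(string userName, string albumName);
}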

The Azure Bits

After you get a good lay of the land, there’s not really that much going on.  As I implied, needing five projects to get an Azure solution running could seem a little intimidating, but with a better understanding, that’s not really the case.

For all intents and purposes, the web UX site behaves as you would expect any MVC 2.0 site to behave. You have an album controller with an Upload view.  When submitted, the photo is pulled from the forms collection and passed to the repository.

image

The Azure part comes in at this point: the photo’s on the way to the storage cloud after adding a new object to the data context.  There are two parts to this, the table entry and the blob storage.  This is great, as we can now explore this and know how to send binary objects to Azure storage. 
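The code screenshots don’t survive in this export, so here is a hedged reconstruction of what that upload path boils down to, reusing the interface and model shapes sketched a little earlier (the names are mine, not the sample’s):

using System;
using System.IO;

// A stand-in for whatever wraps the Azure queue in the sample.
public interface IThumbnailQueue
{
    void Enqueue(string userName, string albumName, string photoId);
}

public class PhotoUploadSketch
{
    private readonly IPhotoRepository _repository;   // table + blob storage live behind this
    private readonly IThumbnailQueue _thumbnailQueue;

    public PhotoUploadSketch(IPhotoRepository repository, IThumbnailQueue thumbnailQueue)
    {
        _repository = repository;
        _thumbnailQueue = thumbnailQueue;
    }

    public void Upload(string userName, string albumName, string title, Stream uploadedFile)
    {
        // 1. Table entry: the photo's metadata becomes a row in table storage.
        var photo = new Photo { Id = Guid.NewGuid().ToString("N"), Title = title };

        // 2. Blob entry: the image bytes themselves go to blob storage.
        byte[] imageBytes;
        using (var buffer = new MemoryStream())
        {
            uploadedFile.CopyTo(buffer);
            imageBytes = buffer.ToArray();
        }
        _repository.AddPhoto(userName, albumName, photo, imageBytes);

        // 3. Queue entry: hand the thumbnail work off to the worker role.
        _thumbnailQueue.Enqueue(userName, albumName, photo.Id);
    }
}

The table row is what the views query, the blob is what the image tags ultimately point at, and the queue message is the only coupling to the worker.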

Next, the image properties are sent to the queue for further processing by this private method in the repository: 

image

That worker role we saw earlier will now have some work to do with an image waiting to be processed.  This queue behavior is similar to other bits in the project when albums or photos are deleted.

Next Steps

The one thing I would like to spend more time exploring is the provisioning of tables and queues in the cloud.  The PowerShell script for queues is quite straightforward (a single call for each queue) and the table provisioning is only slightly more complex (a call is made to a method in the PhotoAlbumDataContext) but all this is done without explanation in this project. 

Data and queue provisioning is part of the setup you perform to get running, but it’s also critical to the project working.  PowerShell is fine locally, but how do you reproduce this creation in the cloud?
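For what it’s worth, the same provisioning can be done from code against the storage account, and that code behaves identically against the local development storage and the real cloud (it’s just a different connection string). A sketch against the v1.x StorageClient library; treat the exact method names as assumptions and the queue/table names as placeholders rather than the sample’s actual names:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StorageProvisioning
{
    // Queue and table names here are placeholders, not the sample's actual names.
    public static void EnsureCreated(CloudStorageAccount account)
    {
        var queues = account.CreateCloudQueueClient();
        queues.GetQueueReference("thumbnails").CreateIfNotExist();
        queues.GetQueueReference("photocleanup").CreateIfNotExist();
        queues.GetQueueReference("albumcleanup").CreateIfNotExist();

        var tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("Albums");
        tables.CreateTableIfNotExist("Photos");
        // The sample's PhotoAlbumDataContext route presumably wraps
        // CloudTableClient.CreateTablesFromModel over its own context type.
    }
}

Call something like this from the role’s start-up (or a one-time admin action) and the PowerShell step stops being special: provisioning becomes just another part of the deployment.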

I also want to understand better some of the architectural decisions on this particular project…like, why is deleting a photo an operation that needs to be queued?  Are blob storage operations inherently slow enough to justify this?  Was it simply for demonstration purposes?

Wrapping Up

Creating this post was a great way to contextualize the basic components of an Azure application.  Sure, I’ve created sites, services, models and set up endpoint bindings before, but not in the context of Azure.

I could have joined any team and crafted an MVC site that sits on top of a repository such as the one in this project, and I would never have to know a lick about Azure – it is that seamlessly integrated into the solution – but I much prefer to get my hands dirty and learn about what’s going on behind the scenes.

I hope this post helps you navigate through the PhluffyFotos sample application and gives you a grounding in Azure-based cloud development.

Thursday, August 5, 2010

Too Many Computers

Many have cited the demise of the PC. Now that tablet X is on the market or that smartphone y has sold millions, it’s only a matter of time before you pitch your PC out the window, right?

I actually believe it’s going to be around for a long time (and that’s not just because I need to get paid).  What we are starting to witness is the transcendence of functionality beyond what our devices were originally used for.

Let’s be honest, here: you don’t need a PC to check email and surf the web.  The way I see it, too many people have computers; the evolution of smart, integrated devices will correct the size of the PC market.

Workstations used to be used for solving problems, not checking your bank balance and surfing Facebook.  I think they are entering a time of homecoming, where people who use computers for their original intent will reclaim the PC and purpose-driven devices will fill in the void for others.

The Need For PCs

Let’s consider a worldwide acceptance of the tenet that PCs are dust.  On International Ditch Your PC Day, everyone throws out their computer in favor of whatever device they are carrying around.  What happens next?

Nothing.  All the applications you have on your device will be all you ever have.  Device development would halt.  There would be no new gaming systems, much less games.  There would be no new prototypes for handheld devices (as many of these start out by taking up several square feet, built off of an emulator on a superfast PC).  Advances in medical research would be crippled.

They can’t all go away, simply for the reason that the bulky little buggers we’ve grown used to on our desktops over the last two decades are needed to advance the things we take with us when we leave our desk.

I don’t mean to discount mainframe computing and virtualized workstations, those will have their place as well, but there are types of folks – myself included – that will need the localized, on-demand power and processing capabilities of a personal workstation.

Cheap Things and Consumerism

This whole situation we’re in is partly a problem with the way we’ve taken everything in our culture to the extreme.  I won’t rant on this too much, but if a 21 year old office clerk can service the debt of an SUV, she does.  If there’s a new computer out that is 8x the speed of your old one with 4x the memory, and it’s only $400, you tend to want to purchase it.  Bigger, better, faster, shiny-er.  That’s how we roll.

So computers fall below $1000 and lots of people start to buy them.  Lots too many, actually.  Then reasonable PCs come in at $750 and you’re shopping with Grams so she can get pictures emailed to her of your new kids.  Then $500.  Now $400, monitor included.  Laptops at $379, netbooks running at $289.  Everyone getting something.

The Balance

We won’t ever see PCs abolished from planet Earth, but they will start to sell less and less.  Prices will even adjust to reflect demand, and we can expect to see PCs priced back up around $1000 (in today’s dollars, that is, not just due to inflation).

This will happen because Grams doesn’t need a laptop to see pictures, she needs a digital picture frame.  Your dad doesn’t need a PC to surf the web, just a small touch tablet.

Desktops will not go away, and their end is not nigh, but I suspect they have peaked in popularity.  They will morph, take new shapes, miniaturize and maybe even become a remote device to our desktop (which has only inputs and a display of some kind).

For anyone who actually uses computers, they will be here for a long time to come.

Wednesday, August 4, 2010

2 Media Families are Expected

I’ve just got a couple of 1TB drives (WD Caviar Blacks…woohoo!) as my new boot stripe and I’m getting prepared to start building my new workstation.

I stage many of my spike solutions on this machine, so I need to back up the web sites and databases that are used to show these snippets of functionality off and allow users to play-test them.  There are usually 3-4 of these active at any given time.

Through SSMS I tried to back up one of the databases and got the 2 Media Families error:

The media loaded on "c:\path\database.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification.

I’ve run into this before, so I had a good inkling that there was a bad backup set.  This isn’t good news if you’re in a production environment, but it’s not so bad in my case.

While it’s not “fixing” the issue, you can get around this error by performing the following steps:

  1. Remove the existing/offending backup locations from the list.  You can do this in the backup task window or by generating a backup script and removing the DISK="your_bad_path" from the command.
  2. Specify a new backup filename in a pre-existing folder.  If the folder doesn’t exist you’ll get an OS3 error with no text, but it just means you’re trying to write to a directory that doesn’t exist.

The NOFORMAT, NOINIT and NOREWIND options come from tape backup origins, and the documentation is not really clear on how they affect system drives.  For example, the MSDN documentation suggests that using FORMAT will render the media set useless.  So what does that do to my C:\ drive?  FORMAT will rewrite the headers for instances where there are different media family counts (whereas INIT will not).

So, I maybe don’t want to use FORMAT?  I don’t have a striped backup set of tapes, so I don’t think this matters.

At any rate, I don’t need whatever corrupt backup file was on my disk, so I changed my destination, removed the bad file from the backup list and carried on.

Wednesday, July 28, 2010

Visual Studio Wish List - IntelliThings

As I continue my daily use of Visual Studio 2010 I find more and more treasures in the IDE.  For instance, did you know that CDN-hosted JavaScript files with corresponding VSDOC files give you IntelliSense?

But I’ve been thinking about some things I’d like to see, as well as a few things I’d like fixed.

Intellisense Doesn’t Work Inside Script Blocks

To be clear on this one, if you are editing inside a script block and try to escape to a server-side tag you will not get CLR object IntelliSense.

For example, I don’t get IntelliSense here:

image

or here:

image

This one is likely a challenge, though, because now the IDE has to sort out whether you want JavaScript IntelliSense or IntelliSense for the CLR types.  Still, it would be nice to have.

Let’s just get IntelliFix in there already, mmmkay?

When there are simple or even semi-complex errors that are common enough that a robot should be able to fix them…let the robot fix them.

Consider a missing semi-colon:

image

I would like to have an actionable interface here.  When I hover over the error, double-clicking should (and does) take me to the line.  Double-clicking again should fix it.  Voila, IntelliFix!

How about other scenarios?  A method that doesn’t return the expected type?  IntelliFix inserts a NotImplementedException for you (bypassing the compiler error).  Member has the wrong level of access for use?  Widen the scope to internal (or public, whatever satisfies the error).  Missing a using statement?  IntelliFix it.

How about IntelliTagTips?

Okay, I totally ‘shopped this one wrong (pay no mind to the types of brackets or the misspelled function) but the idea is what I’m after.

image

Here’s the concept.  If I pause, cursor (and eyes) blinking, at some point in a bunch of brackets, a little tool tip should fade in to show me where I’m at.

I don’t think it should do this for just one (you could use IntelliFix for that!), but two or more would be handy.  It would be a lot easier than trying to go back and count braces and brackets.

Normally I’m pretty good about keeping track as I write and nest code, so this might be a little annoying if it pops up too aggressively.  It would come in most handy when I’m: a) interrupted, b) tired, and c) fixing someone else’s code.

Yeah, the Names Suck…

…but I think these ideas stand up fairly well and Microsoft has teams of folks in marketing.  Therefore, I have added them to my wish list for Visual Studio versions of the future and hope to see IntelliThings there down the road.

Now, don’t mind me, I have to go geek out over IntelliTrace for a while…

Dear Microsoft: Give Us Zune

North of the border we’ve gotten the shaft: Microsoft hasn’t released the Zune HD or the Zune Pass service in Canada.  The implications of the licensing issues, by the way, extend to the Xbox 360, where we Canucks have not been able to access the content library our brothers in America have been entitled to.

And now our motherland – home of our Queen – is getting Zune pass too.  But, here in Canada, nada.

Heck, I even won a contest on CodeProject, the prize for which was a Zune HD, and they had to send me prize money instead because they couldn’t ship the device to Canada!

I want to make this clear

Not everyone in Canada is an Apple fanboi.  Seriously.

We have three media players in my house.  Two iThings and a Creative Labs MuVo.  I actually prefer the MuVo.  Sorry, five media players, because we have a laptop and a desktop PC.  Oops, make that six, because we have an Xbox 360.

I want to have media that can play on all these devices.  Seamlessly.  I don’t want to have to burn CDs, rip them and then re-title all the tracks I have.

I buy my music.  All of it.  I pay for my movies.  We even rent them on the Xbox 360.  I have a Gold Xbox Live membership.  I want my content on demand, I want a library that can grow and change as my preference does.  I want to be able to listen to the music I want to listen to on the devices I want to listen to them on.

I want to listen to it on a Zune HD.

If the Zune subscription service was available in Canada I would be the first in line to sign up (Microsoft readers: you can contact me directly for my credit card number).  I would even organize an iPod burning party and offer a free service to uninstall the pathetic version of iTunes that Apple has crapped onto the plates of PC users for all my friends and family.

Everyone is doing it

Netflix has announced that it is bringing its service to Canada.  I, for one, hope this is a resounding success story for them and a solid earning opportunity.  Why?  Because I want Canada to lose the stigma of being a bunch of media pirates.

If there are successes with Netflix, hopefully this will open the door for Microsoft.  While that would be great, I wish they didn’t need to wait.

We will pay our dues

I promise.

We believe in artists.  We enjoy the content.  Heck, we have a whole media company producing television and radio programming that is run by taxpayers just to prove it!

Because we believe and we enjoy we will also compensate.  Just give us a chance.  And a change!

Start with me!

So, Microsoft folks, I am offering my help and assistance.  I will show all my friends.  I will help my family’s iPods go ‘missing’ and encourage them to replace them with Zune HDs. 

I will stand in front of the Apple kiosk in Walmart and show everyone who is stopping by for an iPod how cool the freekin’ Zune HD is.  I will send emails out to old class mates and post about it on Facebook.  I will Tweet about the awesomeness of the player.

Just send me one.

Heck, send me five and I’ll find homes for them…all in different markets…and I will help get the word out that there is more than one game in town.

I was this close to having one before getting the carpet pulled out from under me.  So I ask, one more time, please make them available north of 49.

Monday, July 19, 2010

Installing Windows Server 2008 R2 on VMWare ESX Server 4.0.0

I’m not sure why this is, why this works, or if you’ll run into it, but I had some troubles with installing Win Server 2K8 R2 on our ESX server.

We have an ISO that is stored on our data store.  We typically select this in the Device Type selection to install an OS.  I selected the Windows Server 2008 R2 (multi-edition) ISO that I have and started the VM.  I got the language selection screen, clicked on ‘Install Now’ and then the fun ensued:

image

A required CD/DVD drive device driver is missing. If you have a driver floppy disk, CD, DVD or USB flash drive, please insert it now.

Note: If the Windows installation media is in the CD/DVD drive, you can safely remove it for this step.

What?  Floppy disks?  Those are so 1998 (the year before I stopped using floppies…but I digress).

So, to get past this, I tried to burn a DVD of the ISO to use that for the installation.  In Summary –> Edit Settings I set the Device Type of the DVD drive to “Host Device”.  I rebooted the VM and ended up back in the same spot as above.

With the above screen still displaying, I returned to the settings page for the DVD and switched it back to the Datastore ISO image.  Et, voila! When I switched back to the console of the VM I was past the driver error!

image

I initially went down the route of trying to find a driver for VMWare’s DVD drive in Win 2K8, but this 8 minute fix seems to work better.  7.5 minutes to burn the ISO, and 30 seconds to reboot and switch the selection.

A few acceptances of license agreements later, and blamo!

image

Good luck with yours.

Apple and MSFT Share the Lead…erm…Switch Places

I just hit Google Finance where the market cap of stocks I follow are listed.  I was surprised to see the following:

image

Apple seems to have slid through the bad press around the iPhone 4, the free bumper announcements and the privacy slips.

UPDATE I just hit refresh and saw the following:

image

Looks like Microsoft has taken the lead.  Should be an interesting couple of months while Apple tries to bleach their reputation back to stainless and Microsoft tries to play catch-up with the release of Windows Phone 7.

It’s probably no surprise that in the phone game I’m rooting for WP7 ;o)

Tuesday, June 22, 2010

My Development History

Based on an interesting post with a link to styled résumés I got to wondering what my development history would look like if I plotted it out.

I pulled out my résumé and dumped my language experience into Excel.  While I have exposure to many technologies (Crystal Reports, ActiveReports, PHP, third-party SDKs, IIS, Apache, Exchange, Squirrel, DNS), platforms (Windows, Mac, flavours of Linux), databases (Oracle, MySQL) and languages (Java, C++, Delphi), I found that it was most clear if I charted the top two languages at any given time.

Excel 2010 quickly whipped up a series of charts for me.  I screen-capped them and spent a couple of minutes styling it up in Photoshop. It’s crude because I only spent about 20 minutes on it.

I tried to reflect that most languages built up gradually before becoming my primary one, but in late 2005 I switched projects, started using C# and haven’t touched VB.NET much at all since (the exception being maintenance).

image (click chart to see a larger version)

As an overlay representing my time in SQL Server I’ve added a black shaded bar.  My responsibilities in SQL Server administration, stored proc development and the like have continued to grow and advance in complexity since 2000.  Today, however, most of my time in SQL development goes into the import/export scripts as we transition from the legacy system.

Below the timeline I added the language that I had been using the second most often.  After doing this it occurred to me that C# has graduated to my longest running language and will be, by next year, my longest running primary language as well.  Not counting T-SQL, of course.

At some point I would love to add to the chart my primary development environment as well.  …but that would take a bit of time and my break is over ;o).

What is not reflected on the chart is that prior to 1997 my primary development was non-MS based as I used Turbo Pascal and Delphi.  Loved the DOS, baby.

Monday, June 21, 2010

Visual Studio 2010 – My Thank-You Gift

I was pleased to get a package in the mail today with a token of appreciation from Microsoft.

image

It is a glass paperweight that came in a small cardboard box with the Visual Studio styling.  Even the box was pretty cool.  I like boxes. ;o)

It reads:

Microsoft Visual Studio 2010

Thank you for the lasting contribution you have made to Microsoft Visual Studio.

S. Somasegar

I participated in several pre-release builds, submitted defect reports, helped run diagnostic analysis for performance-related issues and gave feedback on various Microsoftie blogs.  And, you might say that I weighed in a little on this blog, too.

Thursday, June 17, 2010

Cannot Start Outlook. Cannot open the Outlook Window (Office 2010)

After fighting through the un-installation of the Technical Preview of Office 2010 and its related components I was unable to open Outlook.  I only get this far, then it errors out:

image

The first problem I actually ran into was that I couldn’t install Office 2010.  This was because I had a botched un-install that left several little bits of Office lying around.  I couldn’t remove Visio 14 and had to run Windows Installer Clean Up to get it off the system.  Send a Smile was also busted.  After cleaning off those bits and getting the suite finally installed, I was getting the following message when I tried to run Outlook 2010:

Cannot Start Outlook. Cannot open the Outlook Window.

I am running Office 2010 Professional Plus x64 on Windows 7 x64.

I ran through a number of steps to try to resolve the problem, including the following:

    image

  1. I used the recommended approach where you Start –> Run –> outlook.exe /resetnavpane to try to fix some settings issues. This seems to work for a lot of folks.
  2. I checked the location of my PST files in Control Panel –> Mail to make sure that Outlook was looking in the right spot, i.e., my old PST file.  It was.
  3. I tried to correct a corrupt PST/OST file using ScanPST.exe (included with Office).  This has worked for folks, and I was very optimistic that it would work for me too as it corrected over 5,000 errors, but it did not.
  4. I then tried removing the old profile from my system and recreating it, which left me with an unknown error (0x80070057) and unable to connect to Exchange altogether.
  5. So, now, we’re back to uninstalling Office 2010 and every scrap bit we can find of it.  I’m going to pull down Office 2010 32bit and see if that helps re-establish my sanity.

image

UPDATE: The 32bit version fared no better.  I’m going to uninstall and start removing SDKs and WinMo features/sync just in case something’s tripped with Office Connectivity.  I’ve removed the sync from my mobile device and unplugged my phone.  I’ve also un-installed any apps that at any point in time may have run at the same time as Office.  I don’t know why I did that, but I’m getting desperate.

UPDATE 2: I’ve uninstalled the 32bit and will now try the 64 again. I’m going to see if I can prove Einstein’s definition of insanity.

UPDATE 3: This is getting looney.  I’ve started this episode of debauchery by mutilating my registry and stripping out any folder in any kind of ‘data’ or ‘program files’ sort of directory.  Gonzo.  64bit uninstalled again and registry cleaned.  Folders nuked.  Profile zapped.  Here we go again…this time I will accept the defaults on the install and let it install the things I’ll never use (like Access).

image

UPDATE 4: Now the install is failing, saying that I don’t have permission to modify keys in the registry.  Per instructions here, I’m off to do something to hundreds of thousands of registry entries…

UPDATE 5: The nonsense continues. I am now hunting for registry keys by hand to try and grant myself permissions.  I have booted my computer in ‘clean boot’ mode so nothing else is running in the background, but I don’t think that was ever the problem.

UPDATE 6 (next day): There are (so far) 10 keys and their subkeys that I have found that do not have any owners (and therefore the permissions are null).  I have to find each of the keys, one by one, after I get an error message from the installer as above.  The installer gives me only one key at a time, then I have to go hunt for the offender.

image

Then I drill into the permissions and owners for each of the parent and child keys.  As I have to wait for the registry key to error out from the Office 2010 installer, it can take up to 5 minutes before I know there’s an error, then another minute or two to assign the local administrators group full access and ownership.

As I write this, number nine just came up (but at least the installer made it to 60% this time)…

UPDATE 7: Another key, but we’re getting close to the 2/3 mark now…

image

As a backup plan I’m pulling down a copy of DOS 6.22 and Word 1.0 from MSDN Subscriber Benefits.

UPDATE #You’ve GOT to be Joking:

image

So close…and yet, not so close.  I could almost taste PowerPoint…

UPDATE 9:

image

Finally!!!

UPDATE 10:

….aaaaaaand we’re back where we started.  *sigh*

It looks like everything else seems to be working (Excel, PowerPoint, Word) but I’m getting the “Cannot open the Outlook window.” message again.

image

I’ll be in this weekend to format this beast and start over.

Monday, June 7, 2010

Bing Maps Gets Its Cool On

The new thing in maps is here. It’s not perfect, but it’s still a great start and, I believe, a bit of innovation on the part of the world’s most criticized software company.

Destination Maps

image

A short wizard walks you through a simple process where you select a location (via Bing Maps search), identify a region of the map and then select a style to present the map.

The result is a very clever, “accurate-enough” map of the selected area with an artistic representation of the major routes and landmarks.

For instance, you can select “sketch” or “treasure map” styling in the last step.

The whole process reduces an otherwise hard-to-read and at times confusing view of a city or area into a simple-to-navigate approximation of the same.

Shortcomings

There are two major things that would prevent me from using the tool for something like planning an event:

  1. Users who are not technically savvy cannot share the map easily. While there is a share button, it only generates a link to a file that’s uploaded. I would prefer to see Twitter/Facebook integration or similar. Planning an event on Facebook and using this as the map would be awesome.
  2. There is not enough detail in the generated view to use the service for something like event planning. The maps could help you get to a part of the city, but they can’t get you around a neighbourhood. This may be by design, as there appears to be quite a bit of computational work involved in determining and simplifying the maps. This limitation could help them control processing costs and keep the tool usable.

What I’d Like to See

Firstly, I think there need to be controls for levels of detail. In my hometown we mark everything by the two major bridges and the river. These don’t even appear on the map. In fact, in a town of 40,000 people, we all appear to live on one of three streets.

Perhaps this could be resolved by adding either smarts to determine the level of zoom of a selection or a slider that controls the level of detail. Maybe even separate sliders for landmarks and street details would be a good idea. I think it’s perfectly fine to have the current setting as a default, but I want to be able to help my friends get to a dinner party in a confusing end of town.

Secondly, I’d like to get a route marked out. I want to show everyone at the wedding how to get to the reception afterwards! Allow me to mark out start and end points – even selecting from pre-determined markers – and combine that with a slightly better level of detail, and this next step in online mapping would be…well, it would be fun to plan events!

Well, fun that is if you’re into event planning. ;o)

Lastly, some performance improvements would be nice, as wider selections do take a bit of time. Again, I could see myself using this more as an ‘end-of-destination’ tool: my friends know how to get to the city, not the neighbourhood.

Research Makes Cool Things

This is one of the coolest v1.0 features in online mapping I’ve seen in a long time. I got just about as giddy using this as I did the first time I got Google Maps running.

I hope they take this to the next level; definitely a home run, but three runners short of a grand slam.

Thursday, June 3, 2010

A “Super” Month

Hello, blog readers!

As some of you know, my oldest son lives with Type 1 Diabetes. Here is a video my boys and I made to help raise funds for researching a cure:

The video is called "Superhero" and is about the brother of a kid who lives with Juvenile Diabetes.

To donate, please follow this link:

DonorDrive web site.

    Wednesday, May 19, 2010

    Application Whitelisting as Malware Defence

    I just finished reading an article on TechRepulic’s IT Security Blog which echoes my long-standing recommendation to maintain a “whitelist” for approved applications.

    Michal Kassner, the author of the article, explains briefly the benifits that whitelisting could bring.  I want to talk a little more about an actual implementation.

    The difference between approaches

    Blacklisting and whitelisting are common in many IT scenarios. We use it for mail servers, server-to-server communication and even internet traffic in some proxy server implementations.

    Blacklisting is definitely the more relaxed version of the two: either you or third-party you trust maintains a list of domains, hosts or addresses that are not trusted.  This list is usually tied to software of some kind that prevents users from accessing or receiving data or messages from entries on the list.

    Whitelisting leans towards paranoia.  I’m not saying it’s necessarily bad, but it can certainly be restrictive.  Using the same approach of tying the list to software to protect users, only addresses on the list are allowed to be accessed or permitted into a system.

    Depending on the level of security required there can be a mix of the two working together to protect users.

    When to use which

    We can’t just switch everything over to whitelists and expect the internet to hum along all tickity-boo (without problems).  Imagine your friend registers his own custom domain and sends you an email (from that domain) to check out his new site (on that domain).  First of all, you wouldn’t get the email.  Secondly, you couldn’t access the site.

    So, generally speaking, we should use blacklists where we need wide unannounced access to resources that may be abused.  When and only if they are, the switch is thrown and the address is locked out.

    Some examples of where using a blacklist can help a user without hindering their experience:

    • Email servers pumping out spam
    • Web servers serving malicious content

    And, some examples of where you want ‘whitelist’ behaviour implemented for better security:

    • Servers that only expect connections from certain IP addresses
    • Configuration for your remote desktop
    • Secure VPN access points (when the access points are fixed, such as office-to-office)

    Personally, here’s a few other ways that I use these kinds of lists:

    • RDP connections on my servers whitelist a central IP for access and by default block/ignore other connection attempts. These servers are on private IP addresses.  The central IP has external RDP sessions routed in on a specific port and the router is configured for only a handful of IP addresses.
    • My children have whitelist-only access to the internet.  At age 3 they each got an account on the computer.  My wife and I approve only websites that we preview (lego.com for example).  We have google.com on there as well, which allows them to search, but if they want to access a search result they need our approval.

    How applications fit in the mix

    I think it’s important to recognize that we can’t anticipate every need of users in the IT space.  The job descriptions of people who use computers has grown so diverse that you are equally likely to find a chef who uses a computer every day as you are a computer programmer.

    So can you rely solely on the work of a handful of people to approve applications?  What if one evaluator thinks that an application is malicious because of how it tracks your usage, whereas another finds it useful because it alters the behaviour of the application?

    Who comes up with the criteria for whitelisting?  Who approves applications?  Who blacklists them?

    Does the issue need to be a dichotomy?

    Some of the above questions, to me, suggest that there should be “greylists” as well.  Using heuristics to evaluate software, as submitted by developers, greylists would allow applications to have a level of trust associated with them.  These apps could be sandboxed to protect users, and OS-level alerts could monitor these applications for excessive or abusive behaviours.

    A case for whitelisting applications

    I have worked with a good number of people who see a screen saver they like, or backgrounds, or icons, or mouse pointers and more recently email graphics and templates, and download and install these…treasures.  I have also seen an entire network compromised in an afternoon with zero-day malware hidden in a toolbar install.

    Corporate networks should be configured to prevent the unregulated installation of software. Even as a software developer who likes to download and try out apps all the time, I try to only do so in a sandbox (virtual machine) unless I trust the application provider.

    Corporate machines do not need anything installed on them except the software that enables an employee to do their job.  Mechanics don’t get waterslides near their workstations; likewise, we don’t need users installing Sudoku Extreme.

    For home users it’s a little more difficult to lock down computers, and I don’t feel as though they should be.  I would, however, like to see the implementation of whitelist providers, coupled with a local service that I can maintain.  This service should allow me to say “these user accounts can install these applications that are suitable for kids”.  If an application isn’t on the list, I can add it (as an administrator on my machine).  I could equally say, “these users can use these whitelists, and these users can use these greylists”.

    Take it even further, now, and implement a system whereby the operating system alerts me when I’m launching a greylisted application whose signature has changed (suggesting the possibility of malware).  Give me the option to restrict file or network access to greylisted applications, or limit their access based on user type.

    In a corporate scenario, you could allow management-approved applications on the workstations, and greylisted apps on virtual machines (where users have them).

    Making it work for users

    Ultimately, what we’re doing now is broken.  Chasing malware that can move around the globe in hours is nearly a lost cause.  We haven’t seen any big outbreaks lately (I credit smarter users and more responsible operating system behaviour), but there could be one coming.

    The implementation must not break anything we’re already able to do, and yet provide more security than we currently have.

    That’s a tall order.

    Tuesday, May 18, 2010

    Blog FTP Troubles

    I’m working through an issue with publishing images right now and hope to be back online to continue my jQuery series in the next few days…

    Thanks for the continued questions and comments as I work through all kinds of scenarios with ASP.NET MVC 2 and jQuery.

    Tuesday, May 4, 2010

    ASP.NET MVC and jQuery Part 4 – Advanced Model Binding

    This is the fourth post in a series about jQuery and ASP.NET MVC 2 in Visual Studio 2010.

    In my last post I covered the simple scenario of a Person object getting POSTed to an MVC controller using jQuery.  In this article, I’m going to look at three other, more complex examples of real-world model binding and where jQuery might fit in the mix.

    The model binding scenarios I’ll cover are:

    • Sorting a list with jQuery UI support and submitting IDs to the controller for processing.
    • Using jQuery to augment the user experience on a list of checkboxes (supporting check all/none) and allowing MVC to handle the elegance on its own.
    • Giving users the ability to build a list of items on a complex object, then submitting that object to an MVC controller for processing.

    The entire working solution is available for download at the end of the post.

    Preface

    In the first two examples here I have lists of data that are statically programmed into the page.  From my earlier posts in the series you can see how easy it is to generate partial views that would drive the UI elements from a database or other backend.  For brevity, I’m leaving those elements out of this example.

    Sortable Lists

    The jQuery UI library provides the sortable() function, which transforms the children of a selected element in the DOM into draggable, orderable items for the user to manipulate.

    The sortable() extension also provides a mechanism for us to capture the order of the list via the toArray method.

    Using the jQuery.ajax() method, we submit the results of the user’s efforts via POST to a controller action.  For the purpose of this post, I’m not going to do any processing in the controller, but you can set breakpoints to see the data getting hydrated into the models.

    Let’s start by setting up the controller action, which accepts a list of ints as a parameter to capture the order of the IDs.

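    The original code screenshot is no longer available, so here is a minimal sketch of what that action likely looked like.  The action and parameter names (Sorting, itemIds) are assumptions; the important part is the List<int> parameter:

        [HttpPost]
        public ActionResult Sorting(List<int> itemIds)
        {
            // MVC's model binder hydrates itemIds from the posted array.
            // Set a breakpoint here to inspect the ordered IDs.
            return new EmptyResult();
        }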

    Yes. It’s that simple.  Now, you’d likely want to do some processing, perhaps even save the order to a database, but this is all we need to catch the data jQuery.ajax() will throw at us.

    The juicy bits in jQuery are as follows:

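    Again, the screenshot is gone, so this is a rough sketch under a few assumptions: the element IDs sort-list and save-order are made up, and the itemIds key in the data map has to match the action’s parameter name.

        $(function () {
            // Make every child of the list draggable and orderable.
            $("#sort-list").sortable();

            $("#save-order").click(function () {
                // toArray() returns the element IDs in their current on-screen order.
                var order = $("#sort-list").sortable("toArray");

                $.ajax({
                    type: "POST",             // default is GET; POST routes to the right action
                    traditional: true,        // required so MVC 2 can bind the array
                    data: { itemIds: order }, // key matches the List<int> parameter name
                    success: function () { alert("Order saved."); }
                });
            });
        });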

    Note: In order to make the results of the array compatible with the binding mechanism in ASP.NET MVC (as of MVC 2.0), we need to use the ‘traditional’ setting in $.ajax().

    There are a couple of interesting things to note here:

    • jQuery.ajax() by default makes requests to the current URL via GET.  My controller action that accepts the List<int> has the same name as the basic view.  I change the type here to POST so the proper controller action is called.
    • The ‘toArray’ method returns an array of the selected elements’ IDs.
    • My unordered list contains list items with IDs that represent the unique items.  In this case, they are integers stored as strings (by nature of HTML).
    • The name of the list of IDs is passed in as the same name as the parameter in the controller action.
    • When submitted to the controller, the MVC framework finds the appropriate method, looks at the expected parameter type and sees the array sent by the client.  It then uses reflection and parsing (and maybe some voodoo) to coerce the values into the expected parameter type.


    We can then use the list of IDs as required in the comfort of C#.

    One of the interesting things is that, because the List<int> parameter is an IEnumerable, you can use LINQ to Objects on these guys without any effort.  Thanks, MVC Framework!
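    For instance (purely illustrative, and assuming the parameter is named itemIds as in the sketch above):

        // LINQ to Objects works directly against the bound List<int>.
        var firstThree = itemIds.Take(3).ToList();
        var descending = itemIds.OrderByDescending(id => id).ToList();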

    Submitting Checkboxes

    How do you submit a list of the selected checkboxes back to an ASP.NET MVC controller action?  It’s all too simple.  In fact, the only reason I mention it here is to highlight some of the simplicity we inherit when we use the MVC Framework.

    I actually laughed out loud when I figured this one out.  It’s that good (or, I’m that easily impressed).

    In this example, I’m only really going to use jQuery to augment the user experience by providing a couple of buttons to check all or none of the options.

    We’ll use a standard HTML form and allow the user to select the items in a list they feel are appropriate.  The form will be submitted via POST to our controller action (named the same as the ActionResult for the original View) and our parameter will be automatically populated for us.

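    The markup screenshot didn’t survive, so here is a rough reconstruction.  The checkbox name (selections), the action name and most of the values are assumptions; the point is that every checkbox shares one name, and that name matches the action’s parameter:

        <form method="post">
            <input type="checkbox" name="selections" value="Yoda" /> Yoda<br />
            <input type="checkbox" name="selections" value="Optimus Prime" /> Optimus Prime<br />
            <input type="checkbox" name="selections" value="Obi-Wan Kenobi" /> Obi-Wan Kenobi<br />
            <input type="button" id="check-all" value="Check all" />
            <input type="button" id="check-none" value="Check none" />
            <input type="submit" value="Submit" />
        </form>

        <script type="text/javascript">
            // jQuery is only here to augment the form: flip every checkbox on or off.
            $("#check-all").click(function () { $("input[name='selections']").attr("checked", true); });
            $("#check-none").click(function () { $("input[name='selections']").removeAttr("checked"); });
        </script>

    And the action it posts to needs nothing more than a matching parameter name:

        [HttpPost]
        public ActionResult Checkboxes(List<string> selections)
        {
            // selections holds the value of every checked box (null if nothing was checked).
            return View();
        }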

    Some things to point out at this juncture:

    • The values on the checkboxes here are the same strings that are displayed in the labels.
    • I have given all the checkboxes the same name. When submitted, MVC sees these as some kind of enumerable thing and will then try to bind based on that.
    • Optimus Prime is not a Jedi.
    • The name used for the checkboxes is the same name as the parameter in the controller action.

    Packing a Complex Model

    What if you have a complex type with properties that can’t be expressed with simple HTML form fields? What if there are enumerable types as properties on the object? 

    Man, am I glad you asked! The ASP.NET MVC Framework is pretty smart about taking items off the client Request and building up your objects for you in the hydration process.

    Here is my Suitcase class, with a List<string> that will contain all of the things that someone wants to take along on their vacation.

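    The class screenshot is missing; a minimal sketch would look something like this (Description is an assumed extra property, but the Clothes list is the one referenced later in the post):

        public class Suitcase
        {
            // An assumed simple property, bindable from a plain form field.
            public string Description { get; set; }

            // Everything the traveller wants to pack.
            public List<string> Clothes { get; set; }
        }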

    So how do we get those items into the object?  The first step is to allow users to create them.  We do this with a simple textbox and a button, rigged up to some jQuery script as follows:
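    The script in the missing screenshot probably looked roughly like this (the element IDs add-item, new-item and clothes-list are assumptions):

        $("#add-item").click(function () {
            // Grab the textbox, append its contents to the list as a new <li>,
            // then clear it and hand focus back so the user can keep typing.
            var textbox = $("#new-item");
            $("#clothes-list").append("<li>" + textbox.val() + "</li>");
            textbox.val("").focus();
        });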

    When the user enters an item and clicks ‘Add to suitcase’ (or hits enter), we grab a reference to the textbox.  Next, we use jQuery.append() to create a new LI element with the contents of the textbox.  Finally, we clear out the value and return focus to the input field.

    When the user is finished loading up their bags, we need to create a data map that will be submitted.  To simplify the process a little, we’ll first get that list of clothes together.

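    Something along these lines (again, clothes-list is an assumed ID):

        // Collect the text of every <li> the user added into a plain array.
        var clothes = [];
        $("#clothes-list li").each(function () {
            clothes.push($(this).text());
        });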

    We first create an empty array.  Next, we use jQuery.each() to loop through all the returned elements (the list of LI elements that the user has created) and add the text of those LIs to the array.

    Next, we POST the data back to the server:

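    A sketch of that POST, with the caveat that the Description field and the element ID are assumptions; what matters is that the keys in the data map match the Suitcase property names:

        $.ajax({
            type: "POST",
            traditional: true,   // keeps the Clothes array in a format MVC 2 can bind
            data: {
                Description: $("#description").val(), // matches Suitcase.Description
                Clothes: clothes                      // matches Suitcase.Clothes
            },
            success: function () { alert("Suitcase packed."); }
        });

    On the server side, the HttpPost attribute and the Suitcase parameter are all the differentiation MVC needs (the action name here is also an assumption):

        [HttpPost]
        public ActionResult Packing(Suitcase suitcase)
        {
            // Set a breakpoint here: suitcase.Clothes contains each item added on the client.
            return View();
        }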

    Here are some observations:

    • We’re POSTing and using the traditional setting so that the enumerated items are compatible with the current versions of jQuery and MVC.
    • The names of the properties in the Suitcase class are the names of the values we use in the data map submitted by jQuery.ajax().
    • As in the first example, jQuery.ajax() is posting to the default URL, which is the same URL as the view in this case.  In the controller we differentiate the action with the HttpPost attribute and, of course, the Suitcase parameter.

    When the data is submitted, the breakpoint in the controller action shows the fully hydrated Suitcase object, and the Suitcase.Clothes property contains each of the items the user added on the client.

    Wrapping Up

    There you have it: the basics of advanced…stuff. 

    From here you should be able to work out most scenarios when building up objects in jQuery, submitting them to the controller and making a jazzier UI come together with jQuery UI while still using MVC in the backend.

    Some things I’ve learned along the way:

    • Remember to watch the names of your variables in data maps! They have to match the parameter names (or the member properties of the parameter’s type) on the controller action.
    • If you’re having trouble getting things to submit and you’re not seeing any errors, try attaching Firebug in Firefox to the browsing session to see what’s happening with your requests/responses.
    • If you’re having trouble binding, make sure that you’re sending the values of the jQuery selections, and not the jQuery objects themselves.
      • Don't send: $("#my-textbox")
      • Send: $("#my-textbox").val()

    Resources