This blog has, IMO, some great resources. Unfortunately, some of those resources are becoming less relevant. I'm still blogging, learning tech and helping others...please find me at my new home on http://www.jameschambers.com/.

Thursday, April 30, 2009

Immediately Useful - BB>187

There is a part of our web admin console that displays the MAC address of a device, which our network admin routinely punches, value by value, into Calculator to convert it to a decimal format:

  AC:0F:BB:EF:01:CE

…and that becomes:

172.15.187.239.1.206

He has to do that because there is another utility that expects the decimal format of the MAC address.

AND, this has to be done each time we add a new device/customer/end point on our network.

I am using C# to convert the MAC address to decimal in this example.  .NET provides some quick-and-easy ways to convert from hex to decimal, which is ideal when converting a MAC address.

There are a number of approaches to solving this kind of issue, but I chose one that doesn’t require (much of) a user interface.  The only visible aspect of the program is a task tray icon that, when double-clicked, converts the contents of the clipboard from hex to decimal.

I chose this approach because, for the most part, we’re dealing with one (or few) of these conversions at a time.  He can copy, double-click and paste into the other app without having to introduce a third, windowed interface into the mix.

Here’s the meat of the method that does the work:

 

// requires System.Globalization (for NumberStyles)
// and System.Windows.Forms (for Clipboard)
try
{
    // grab whatever's on the clipboard
    // and try to break it apart
    string input = Clipboard.GetText();
    string[] parts = input.Split(':');

    // convert the data to integers
    // and build a string
    StringBuilder sb = new StringBuilder();
    foreach (string part in parts)
    {
        sb.Append(int.Parse(part, NumberStyles.AllowHexSpecifier));
        sb.Append(".");
    }

    // need to drop that last . from the
    // string and set the clipboard
    string result = sb.ToString();
    Clipboard.SetText(result.Substring(0, result.Length - 1));
}
catch (Exception)
{
    // nothing fancy here, anything goes
    // wrong and we bail...
    Clipboard.SetText("Hrm...Bad data...");
}



Put that into a double-click event handler for a NotifyIcon and you’re sailing.
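If you haven’t wired up a NotifyIcon before, the plumbing is only a few lines.  Here’s a rough sketch – the form class, icon text and method name below are my own placeholders, not the actual app’s code:

using System;
using System.Drawing;
using System.Windows.Forms;

public class TrayForm : Form
{
    private readonly NotifyIcon trayIcon = new NotifyIcon();

    public TrayForm()
    {
        // keep the form itself out of sight; the tray icon is the whole UI
        ShowInTaskbar = false;
        WindowState = FormWindowState.Minimized;

        trayIcon.Icon = SystemIcons.Application;
        trayIcon.Text = "Hex to decimal converter";
        trayIcon.Visible = true;
        trayIcon.DoubleClick += (sender, e) => ConvertClipboard();
    }

    private void ConvertClipboard()
    {
        // ...the try/catch block from above goes here...
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new TrayForm());
    }
}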

A Truly Random Number in MS SQL

Without further ado:

select
convert(int, 1 + 400 * RAND(CHECKSUM(NEWID())))

So, if you’re wondering what’s going on there, I’ll break it down.

First of all, we can’t make use of the RAND function on its own.  When called repeatedly in quick succession it will actually produce the same number.  If you give it a seed as a parameter, it uses that same seed (and therefore generates the same “random” number) for each row that is returned.



The only function that I am aware of in SQL that generates something unique is NEWID, but this isn’t the type of data we want, as it produces a GUID suitable for unique identification.

BUT!…if we wrap NEWID with CHECKSUM we get an integer, a random integer, to work from.

Unfortunately, this isn’t exactly what we need either, as we have no control over the range of values produced.

HOWEVER!…we can reintroduce RAND to the equation and get us a float between 0.000 and 1.000.  Nice.  Multiply this by the range you want, then add 1 (because of the way RAND works, it will never generate a 1.000 so you’ll never get the max value in your range).

Basically, the above code will generate a random number between 1 and 400 with really good distribution.  In a quick test I got 99.9% unique values from 10,000 samples over a range of 100,000.

Change the 400 to whatever you like and away you go.

Visualization – Where all the Cool Data Hangs Out

One of the things that I struggle with is convincing users that they don’t just want to see a list.  We’ve trained them to believe otherwise (nearly all useful data has been presented as lists for decades).  I posted on that here a few weeks ago.

I showed a couple people yesterday some of the cool things that can be achieved if you use some of the technologies that are out there, primarily Silverlight and DeepZoom.

There is a brief but pointed commentary from one of the folks involved in MIX – a conference that I’m hoping to attend next year.  If you’re interested, be sure to check out the Descry project for some samples.

So yeah…

That’s a no on the nightly jobs for that server.  More digging warranted and likely in the next few days…

Wednesday, April 29, 2009

Mixed Messages

I started into creating some jobs today that would keep a couple of databases synchronized and ran into a problem getting the SQL Server Agent service running on one of the machines.

Normally this is trivial to resolve: you start the service or change some permissions for the login that you’re using and away you go.

I saw a series of errors that led me down those paths, but unfortunately neither resolved the issue.  To further complicate things, I (believe I) have created a couple of jobs that may or may not be running.  Mind you, I’ll know tomorrow, as I’ve run TRUNCATE against a table that should be repopulated if they are ;o)

What it came down to (after a series of wasted efforts) is the following message:

The edition of SQL Server that installed this service does not support SQL Server Agent.

Also, I was digging around a bit and found the following error message in SSIS:

The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services.

No fix yet, but I suspect a botched install and will try to figure out what happened tomorrow.

Tuesday, April 28, 2009

Ping! You’re It!

Here’s a quick little bit of code that will allow you to scan a subnet for available IP addresses (provided no one is blocking ICMP traffic).

You’ll need to add a reference to System.Windows.Forms to allow access to the clipboard (and mark your entry method with [STAThread]).

This is copied & pasted straight out of a throwaway spike project, and it worked…

using System;
using System.Collections.Generic;
using System.Net.NetworkInformation;
using System.Text;
using System.Windows.Forms;

namespace Spike.IMCP
{
    class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            Ping pingSender = new Ping();
            List<string> available = new List<string>();
            List<string> unavailable = new List<string>();

            for (int i = 1; i < 255; i++)
            {
                string address = string.Format("192.168.0.{0}", i);
                PingReply reply = pingSender.Send(address, 20);
                switch (reply.Status)
                {
                    case IPStatus.Success:
                        Console.WriteLine("{0} {1}ms TTL:{2}",
                            reply.Address,
                            reply.RoundtripTime,
                            reply.Options != null ? reply.Options.Ttl.ToString() : "?");
                        available.Add(address);
                        break;
                    default:
                        //Console.WriteLine("{0} ({1})", reply.Status, address);
                        unavailable.Add(address);
                        break;
                }
            }

            Console.WriteLine("Unavailable addresses");
            StringBuilder sb = new StringBuilder();

            foreach (string item in unavailable)
            {
                Console.WriteLine(" {0}", item);
                sb.AppendLine(item);
            }

            Clipboard.SetText(sb.ToString());

            Console.ReadLine();
        }
    }
}

SQL Data Transfers

Still working through the scripts required to transfer the data over, but at least now I have completed the mapping (and the target data model) for the new database.

My script to suck over all the old data is running nightly, and I have downloaded the reporting requirements that will be needed.  I have also drafted a couple of report concepts that should start providing the proof that the transfers are working as expected.

At this point, it’s just a matter of working through any of the related tables that have ‘hidden’ customer data in them to make sure that we’ve got all the data captured.

Monday, April 27, 2009

Adventures in SNMP

I have been working with Dart’s PowerSNMP library today and have had some limited success.  Most of the examples they provide are fairly grandiose, whereas my intent is to have little one-off commands that are easily executed by a user with little-to-no experience administering network equipment.

Our Helpdesk guys are using a myriad of complex (or at least cumbersome) tools; my goal is to equip them with simpler interfaces so they can do their real work, i.e., helping the customer.

Where the Dart tools fall short is in the fact that there are very few ‘simple’ starters.  They have these big project templates that are very dynamic and have great user interfaces for folks that are in a discovery and administration role. 

I’m coming at it from an entirely different approach: we know the exact devices that we want to talk with, and we know exactly what we want to say.

Friday, April 24, 2009

Migrating from MySQL to MS SQL

The legacy database I’m working from is MySQL version 4.0 and I am moving to SQL Server 2008.  My migration strategy is fairly straightforward: at some point in time (hopefully the near future) we will be able to run a script at night that copies all data to MS SQL, then transforms it to the new data model.

My tools for this will be:

  • dbForge Studio
  • MySQL ODBC data connector
  • RegEx for search and replace
  • MS SQL Server Management Studio (SSMS)

The first step is getting all the table data over to MS SQL.  I don’t want to impact the live db and I want to be able to run the script as many times as I like, whenever I want to.  Depending on performance, I may even set it up for a nightly batch for the short term.

I don’t care about preserving indexes (there are none) or identifying primary keys (not all tables have them).  I need to be able to ‘flash’ my copy and then operate off of the data at that location.  I will not be sync’ing anything and I really don’t care if IDs line up.

All of the above statements on the state of the data actually help me out a fair bit.  While it may not be too efficient, I am free to DROP all the tables I have imported each pass and recreate (and populate) them.  This gives me the side-effect bonus of being able to ‘accidentally’ toast any parts of the data that I like.

Here’s another benefit: being in SQL Server, I can now start writing transforms that move data to my new tables, then output reports through SQL Reporting Services nightly to PDF, giving me a head start on some Raven development…but I digress…

So, here are the steps I followed:

  1. Establish a connection to the legacy database on the new database server through the MySQL ODBC connector.
  2. Get a list of existing tables from the legacy DB.
  3. Convert the list into a reusable script using RegEx
  4. Execute!

Establish A Connection

I installed the 5.1 build of the MySQL ODBC driver, but I failed to establish a connection to the database when I tried to create the system DSN. The version of MySQL that is running on the server is 4.0, so I took a shot in the dark and installed the 3.51 build of the ODBC driver and was able to connect.

You will need to name the DSN and know the IP address (or host name) that resolves to the old server, as well as the username and password of a user with appropriate privileges. I’m an arse, so I just used root. ;o)

Pick your database and away you go.

Next, pop over to SQL Server and navigate to your Server Objects in the tree, then expand and right-click on Linked Servers to create a new one.  Name your Linked Server; this will be the name you use when you reference the server in TSQL. 

Select Microsoft OLE DB Provider for ODBC Drivers as the Provider, specify MySQL as the Product Name and then use the DSN name you specified when you created your System DSN for the Data Source property.

All connected, and good to go.

Get Your List of Tables

This actually proved to be much easier than I anticipated.  You can likely use any client/editor/SQL environment that connects to MySQL, but I’ve been lovin’ on dbForge Studio.

I opened up my editor, selected the database and started a new query.  Brace yourself, but you’ll have to type two whole words: SHOW TABLES.  Nice.

Copy your results to the clipboard and head over to SSMS.

Convert Table Names to a Script

I love regular expressions.  I was actually contracted by MSDN Magazine to write an article on RegEx just as .NET 2.0 was about to be released. Sadly (for me), the content of the article didn’t really fit the theme for a good 4-6 months, and by then there was a good amount of content out on the market.  Oh well, got my $150 from MS as a retainer. :oD

Paste your list of tables into a new query window (on any existing database).  We’re not actually going to run anything here, we just want to use the features of the editor to do some crazy-mad scripting wizardry.

After you’ve pasted, make sure there are no ‘extra’ blank lines at the end of the editor…it will save a clean up step at the end.

Hit CTRL-H to open up the Find and Replace dialogue.  In ‘Find what:’, type “^.*$” without the quotes.  This is a regular expression that will match a whole line of text: the caret is ‘start of line’, the .* eats all text up to the $, which is ‘end of line’.

Next, paste the following into the ‘Replace with:’ field:
if object_id('\0', 'U') is not null drop table \0 \n  select * into \0 from openquery(YOUR_LINKED_SERVER_NAME, 'SELECT * FROM YOUR_LEGACY_DB_NAME.\0')

Basically, what we’re doing is using the regular expression engine to replace a table name with our little bit of script.  Anywhere the \0 appears, regex will inject whatever we ‘ate’ with .*, which in our case was a single table name.

Be sure to change the obvious placeholders for your linked server name and the name of your legacy DB.

Hit Replace All and boom! You have your script.  Save it out, then select all and copy to the clipboard.  You can close the query at this point.
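If you’d rather not do the substitution in the editor, the same trick is only a few lines of C#.  A quick sketch (tables.txt is a made-up file holding the SHOW TABLES output; I’m matching each line with [^\r\n]+ instead of ^.*$ so the carriage return doesn’t get dragged into the match, and $0 plays the role of the editor’s \0):

using System;
using System.IO;
using System.Text.RegularExpressions;

class ScriptBuilder
{
    static void Main()
    {
        // tables.txt holds the SHOW TABLES output, one table name per line
        string tables = File.ReadAllText("tables.txt");

        // $0 is the whole match (the editor's \0): one table name
        string script = Regex.Replace(
            tables,
            @"[^\r\n]+",
            "if object_id('$0', 'U') is not null drop table $0\r\n" +
            "select * into $0 from openquery(YOUR_LINKED_SERVER_NAME, " +
            "'SELECT * FROM YOUR_LEGACY_DB_NAME.$0')");

        Console.WriteLine(script);
    }
}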

Execute

Create your new database in SQL Server, then right-click on the db and start a new query.  Paste your script into the editor and…be brave now…hit F5.

Now go for coffee.  If you have a lot of data in your legacy DB, you’ll likely have time to go for coffee in a nearby neighbourhood.  And, if it’s really large, you might even be able to grow your own beans.

My MySQL server is on the same network and the DB I’m ripping has approximately 100,000 rows.  It takes just over 12 minutes for me to run the above.

I Win.

The most impossible table to deal with was using 63GB of hard disk space for indexes and 96GB of space for data.

MS SQL Server was giving me I/O errors and complaining about torn pages.  The SHRINKDATABASE command was just off in some useless cycle and wouldn’t respond.  Trying to CHKDSK the drive resulted in a 12 hour 1% fest.  Copying the data to another drive just froze the OS.

So, I did what any irrational person would do and convinced everyone else that they didn’t need that data any more.  The three billion rows of data were a little excessive anyways.

DROP TABLE ResponseTime

Tada!

The truth is that we only really care about the last seven days, and since it’s been down since Monday – and we won’t really be using it too much over the weekend – we decided that it was worth more to get the database back up and collecting data than it was to try to preserve last week’s data (along with the previous 220 weeks of data), so we just moved on.

I recreated the table and the polling engines are all back online filling up our database as fast as they can.

I created two jobs for SQL Agent to attend to: one to clean up data that is older than 15 days and one to clean out the older transaction logs. I also rebuilt the indexes on all the other tables.  In the 232GB file that is our database, we are now only using approximately 400MB of very non-fragmented space.

Before leaving work today I will most definitely try SHRINKFILE again…

Thursday, April 23, 2009

Ode to Solar Winds

Polling servers collect data
Database stores data
Hard disk fills up and starts to fail

My relationship with Solar Winds is already in jeopardy and, if all goes well, it will be very short lived.  >:o|

I spent the better part of today trying to nuke old data (5+ years old) that hasn’t been used and will not be used again.

I’m fighting with a number of things: (very) limited RAM, a completely full drive, 99% fragmentation and over 700 errors in most of the highly-populated tables.  Not to mention the fact that the drive is failing and most repair attempts fail when I try to run them against the files.

*sigh*

Three weeks, three pinless grenades.  Sweeeeet…heheh…good thing I know how to juggle ;o)

Wednesday, April 22, 2009

And That’s How They Do It Near Blood

I was pulled in a number of directions today but did get a great chance to lunch with an old friend who’s now in a fairly senior position at the regional hospital.

After grabbing some Chinese buffet we headed over for a complete tour of the data centre and backup facilities, and had a few good conversations about virtualization, remote management, server room cooling and one heck of a large battery backup system with three cores connected to dual-redundant power supplies on all their servers.

In other words: yum.

Came back to the office and chewed through some planning stuff with one of my bosses.  Little bit of role reversal there as we had some fun and I challenged him to make a business case for a bunch of work he wanted me to do.

;o)

Tuesday, April 21, 2009

Spike.Prototype

Today I laid roots for the start of the helpdesk monitor, a full-screen WPF application that talks to a couple of webservices and keeps everyone at the helpdesk in tune with the calls that are in the queue and who’s talking to whom.

In developing WPF applications and the corresponding classes that will feed those apps, I’ve found that one of the biggest mind shifts for developers is the use of dependency properties.

They are fairly trivial to implement, and the benefits are wonderful.  With the binding, animation and styling elements that come into play in WPF, they are also mandatory learning.

Adding a dependency property – which must be in a class that inherits from DependencyObject or one of its inheritors – is as simple as setting up a property as you normally would and then calling the static Register method on the DependencyProperty class.  Your getters and setters then just reference the DP you’ve set up, and at that point the internals of the .NET Framework take care of the heavy lifting.

public string AgentName
{
    get { return (string)GetValue(AgentNameProperty); }
    set { SetValue(AgentNameProperty, value); }
}

public static readonly DependencyProperty AgentNameProperty =
    DependencyProperty.Register("AgentName",
        typeof(string), typeof(CallItem),
        new UIPropertyMetadata("(Unknown)"));




As a developer we can now leverage the very cool binding benefits of our class, such that if anyone/thing/class/event changes the value of our property we can automagically start playing an animation and/or update our user interface without effort.  Seamless binding syntax in XAML allows for easy Observer-style up-to-datedness without having to do backflips or write a ton of boilerplate code.
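To make that concrete, here’s a minimal sketch of binding a TextBlock to the AgentName property above from code-behind (the class and variable names are hypothetical); in XAML, with a CallItem as the DataContext, it collapses to Text="{Binding AgentName}":

using System.Windows.Controls;
using System.Windows.Data;

public class BindingSketch
{
    public void WireUp()
    {
        CallItem call = new CallItem();
        TextBlock agentText = new TextBlock();

        // one call sets up the binding; the dependency property gives us
        // change notification for free, so there's no INotifyPropertyChanged
        // plumbing to write
        BindingOperations.SetBinding(
            agentText,
            TextBlock.TextProperty,
            new Binding("AgentName") { Source = call });

        call.AgentName = "Jane Doe";   // agentText.Text updates on its own
    }
}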



Hrm…seems I need to find a better format for my blog posts where I’ll be pasting in code…

Dell Inspiron 9400 versus Windows 7 64bit

Long story short, Windows 7 won.

The laptop has an ATI Mobility Radeon X1400 graphics card for which AMD won’t provide support.  I really like NVidia’s unified driver architecture for this reason.  It seems to me that I have never had a problem finding a driver for an NVidia card on any OS, because they always support the card – even in a minimalistic fashion – with their drivers.

AMD leaves their mobile chipset support up to the hardware vendor, and won’t even provide a stock driver.

I highly doubt, after 2.5 years and two new operating systems, that Dell is actually going to post a driver for a card that only came in a $1500 laptop if you paid $300 to upgrade.

I’m still going to try out the 32 bit version and see how she flies.  No point in making the switch if I can’t use my graphics card (or the 17” 1900x1200 LCD) that came with the ‘puter.

Of interest: I recently pitted a Dell Zino against a Mac Mini for purchase evaluation and also predicted that Apple will cause the end of the world.

Monday, April 20, 2009

Rephrasing the Business Vernacular

It is always interesting when you first join a team and everything they talk about seems foreign or new.  It usually doesn’t take long before you realize that much of what is said, many of the concepts and the overwhelming majority of acronyms are actually just different ways of saying the same things you’ve said all along.

There are instances, such as this new position, however, where everything is new to everyone. The company where I now work has expanded with triple- and double-digit growth since inception in a market that doesn’t know its product.  The technology is new, the people are new and the processes are new.

You can’t hire people with experience in this field; there just aren’t a lot around.

So it’s no wonder that the company has adopted terminology which works for them but seems like a foreign language to me; it seems foreign to some of them too!

What has ended up happening over the last few years is that the people here have begun to recognize some of the processes as being similar, so, while there are Service Orders, Work Orders, Tower Tickets, Help Desk Tickets and Infrastructure Work Orders, they are all just units of work.

I am making it part of my job to introduce more common functionality to handle these very similar concepts and to merge much of the existing interfaces together to capture this same data.

The biggest strike against the framework as it exists (I’ll leave Classic ASP out for the moment) is that there are different data structures for all of the above work.

Today was a big data modelling day – again – as I was able to permanently strike off some 25 tables from the company data model.

And, if I’m successful in this attempt, some of that local language will soon be struck out of the vocabulary here too.

Friday, April 17, 2009

Performance-holic

I have made a number of configuration and code changes today that should improve the performance of EMS overall.  I’ve already had some good feedback and hopefully the boat will remain stable.

In particular, our users should notice performance gains when searching for and drilling into customers, service orders and work orders.

Here’s a list of some, but not all, of the optimizations:

  1. added indexes to all the common tables (the ones with the most records) to cover the most common fields
  2. increased the memory allocated to the query engine
  3. increased the caching limits
  4. modified startup parameters in MySql to allow for a bigger buffer and dictionary store
  5. increased total key and sort buffers
  6. scheduled daily worker process recycling in the application pool for IIS
  7. scheduled worker process recycling after every 5000 requests (was at 35000)
  8. reduced the number of applications in the EMS pool
  9. stopped unnecessary Windows services and set them to manual startup
  10. changed any queries marked as long running by MySql to include a where clause limiting the number of records returned (specifically for queries that end users don't see, like when the code is only used to template out the columns).

That should help us hold out for the short term. 

Now…off to the real work…

Found the Bottleneck…

…I think.

I did a scan of the classic ASP site (the UI for the existing EMS application) and found 1740 instances of the phrase “SELECT * FROM” in 280+ files.  1600 of those have no where clauses. 

There are over 2000 table joins across those queries, and the 140+ tables have no foreign key relationships defined.  Many don’t have primary keys, and there isn’t a clustered index in the lot. There is nary a stored proc.
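Getting numbers like that doesn’t take anything fancy, by the way.  A throwaway scanner along these lines does the trick (the path is a placeholder and the WHERE check is deliberately crude):

using System;
using System.IO;
using System.Text.RegularExpressions;

class QueryScanner
{
    static void Main()
    {
        // point this at the root of the classic ASP site
        string root = @"C:\inetpub\wwwroot\ems";

        int total = 0, noWhere = 0, files = 0;
        Regex select = new Regex(@"SELECT\s+\*\s+FROM\s+(?<rest>[^""]*)",
                                 RegexOptions.IgnoreCase);

        foreach (string file in Directory.GetFiles(root, "*.asp", SearchOption.AllDirectories))
        {
            string text = File.ReadAllText(file);
            MatchCollection matches = select.Matches(text);
            if (matches.Count == 0) continue;

            files++;
            total += matches.Count;
            foreach (Match m in matches)
            {
                // crude check: does the rest of the statement mention WHERE?
                if (m.Groups["rest"].Value.IndexOf("WHERE",
                        StringComparison.OrdinalIgnoreCase) < 0)
                    noWhere++;
            }
        }

        Console.WriteLine("{0} SELECT * FROM statements in {1} files, {2} without a WHERE clause",
                          total, files, noWhere);
    }
}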

Today I’ll sort through the worst offenders and try to find a way to improve performance as a band-aid solution for the next few months. I’m just hitting the most commonly used pages and using the “Slow Log” feature I found in the MySql admin console.

Thursday, April 16, 2009

Data Prep, Part 1: GPS Co-ordinates

I wish that there was a package out there that took any kind of data and was able to interpret it, validate it and correct any mistakes, regardless of the different symbols, notation or ordering used in the data as it was originally entered.

Actually, I suppose that package already exists.  It’s called a human. 

Unfortunately, they’re terribly inefficient and tend to get bored when presented with mundane tasks, so we have to re-invent (a very small portion of) the wheel whenever we come across data that would give most DBAs nightmares.

I have to get this out there…I hate free-form text.

I just churned through an exercise in regular expressions that had me standardizing a set of 11,000 records containing GPS information.  In the end, there are only 100 lines (less than 1% of the data) that do not match the required pattern and will need to be corrected by hand.

For the most part, data was entered in some variant of DD MM SS.ss for both latitude and longitude. Sometimes those co-ords were reversed, which added a few headaches.  The range of characters used to express the text was as varied as the locations contained in the database.  When you see 9947126 in the database, are you able to tell whether that means 99.4 71 26, or 99 47.1 26, or – the correct answer – -99.0 47.1 2.6?

Not all of it could be done with regex alone;  we had section-township-range information which can be converted to a rectangular approximation.  We also had notes on some of the records, and some customers had work orders with even more hints as to how to solve the puzzle. 

Thankfully, the large majority of the data used the pattern stated above and I just had to deal with the user’s choice of delimiter (- 99*47.1’2.6”, for instance).
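For the curious, the shape of the expression looks something like the sketch below.  It isn’t the exact pattern I used (that one also copes with the inverted co-ords and a few other oddities), but it shows the idea: sign, degrees, minutes and seconds, separated by whatever junk the user picked as a delimiter:

using System;
using System.Globalization;
using System.Text.RegularExpressions;

class GpsSketch
{
    // sign, degrees, minutes, seconds, separated by anything non-numeric
    static readonly Regex DmsPattern = new Regex(
        @"^\s*(?<sign>-)?\s*(?<deg>\d{1,3}(?:\.\d+)?)[^\d-]+" +
        @"(?<min>\d{1,2}(?:\.\d+)?)[^\d-]+" +
        @"(?<sec>\d{1,2}(?:\.\d+)?)\D*$");

    static bool TryParseDms(string input, out double decimalDegrees)
    {
        decimalDegrees = 0;
        Match m = DmsPattern.Match(input);
        if (!m.Success) return false;

        double deg = double.Parse(m.Groups["deg"].Value, CultureInfo.InvariantCulture);
        double min = double.Parse(m.Groups["min"].Value, CultureInfo.InvariantCulture);
        double sec = double.Parse(m.Groups["sec"].Value, CultureInfo.InvariantCulture);

        decimalDegrees = deg + min / 60.0 + sec / 3600.0;
        if (m.Groups["sign"].Success) decimalDegrees = -decimalDegrees;
        return true;
    }

    static void Main()
    {
        double dd;
        if (TryParseDms("- 99*47.1'2.6\"", out dd))
            Console.WriteLine(dd);   // roughly -99.786
    }
}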

The other hurdle today was working around our dying pair of production servers, which are trying to serve up data from a database that lacks indexes, foreign keys, stored procs and even a DB version that caches execution plans.  Every time you drill into a customer record or work order it pegs the servers.

We had three crashes today, eating up approximately 1.5 hours * 15 employees, so around three man days of time.  That’s real money there, so it was quite the load on my shoulders in my second full week on the job.

Wednesday, April 15, 2009

How We Failed

I had a chance to sit down and walk through a list of higher-priority items with several of the key stakeholders for the existing enterprise management application today.  We made some great progress and I was able to diagram out some of the NextMostImportantThing items that users are looking for.

It wasn’t the features they were asking for that shocked me, however, when we spoke; rather, it was the way they described them.

Generally speaking we as software developers have failed our users.  Perhaps, better said, we have failed ourselves because of what the users have come to expect, and what they perceive a ‘good’ user interface might look like.

For a long time I have been in the camp that claims, “Users don’t really know what they’re asking for.”  In a lot of cases I could argue that still might be true, though I would suggest a rephrase on that take to say “We haven’t equipped users with the vocabulary they need to best express the functionality they would like.”

Why is it important to put the blame on developers, analysts and architects?  Because it is not likely to be our end users who come up with the next innovation in user interfaces, and in order for us, as a community, to do that very thing, we must first admit that our processes are broken.

Take, for instance, some of North America’s leading applications in the CRM category.  I was recently contracted to a company to help them decide, from the myriad of options out there, which platform and package best suited their needs.  Vendor after vendor, I became increasingly frustrated as I helped the company survive demos and walkthroughs that confused users and presented overcomplicated user interfaces for very basic tasks.

I later returned to that company – after their employees completed the three-week training program for the selected package – to help a lady who was in tears because she couldn’t remember how to print labels.

I remember the moment of the trainer’s failure: when asked how to print labels, he replied very confidently, “Oh, that’s easy! All you have to do is pull up a query of the customers that you want the labels for, flip over to this tab, look for the tree node appropriate for the label size and type you’re looking for, then drill into the options to begin the merge.”

To you and me, that might sound easy, but lately I’ve been replaying the stunned looks on those users’ faces over and over in my mind.

Our users are people, not computers. Data and queries and tree nodes and tabs and asynchronous operations mean nothing to the 45-year-old office worker, and quite frankly, to most 15-year-old so-called “computer whizzes” too.  We have polluted the user experience with the elements that make our lives – as developers – easier to digest.  I love using a tree to walk through a class hierarchy or browse a project, but is that how people – note that I didn’t say “users” – think of their customers?  Is it fair of us to group everything not part of the application framework or the user interface into a bucket called “data” and then blame the users for not getting it?

I’m starting to wonder if great design begins not with understanding what the user wants, but rather with equipping the users with the vocabulary needed to express their requirements.  When they start thinking and telling us about the kinds of things they would like the computer to show them – and I’m not talking about endless grid after grid after table after filtered list – then we will be able to empower them to do the human side of work and start letting the computers express the data in more meaningful ways.

Even Smaller…

After further scrutiny, we are down to 24 tables that need to be converted…woot!  Some of the data is out-of-date by three years, so, although the structure will be important in the new system, the data will need to be re-entered.

Tuesday, April 14, 2009

More on Data Mapping

Today has been a bit of a cross-eyed day working with an eight-year-old data model in MySql.

To start things off, I found a great little utility called dbForge Studio that would help me with diagrams and figuring out relationships and usage and all that jazz.

Then came the startling revelation that there aren’t actually any relationships in the database.  Or indexes.  And in more cases than I’m comfortable with, no primary keys.  Foreign keys are often stored as CHARs.

Well…at least I have a good idea of where the performance problems are…

Of the 146 tables in the database, only 75 are currently in use.  Of those 75, 22 tables are used as categories or types, so they can be munged down into my RefCode structure.  Of the remaining 53 tables, about 12 or so seem to be related to inventory and won’t be used when we bring Passport online, and about 5 are related to user accounts, meaning I really only have about 36 tables to worry about trying to convert to the new structure.

There are 11 tables beyond the 75 that we don’t yet know if they are in use.

The process has been to go through the corporate portal, go through the help desk site, go through the executive view of things, traverse the external web site sign-up process and finally…ask around.  No one knows about those 11 tables…do you?

SQL Server 2008 and Intellisense

I thought I was losing my mind for a few minutes this AM as I was having trouble getting Intellisense to kick in for part of my query.  I kept getting ‘Invalid Object Name’ errors on some of my tables and columns, but it didn’t seem consistent at first.

I quickly realized that new tables and columns weren’t getting picked up by the engine.  I closed my query window, started a new one and tried again.  No love.

So I closed SSMS and opened it back up…worked a treat.  So I said to myself it must be caching somewhere…and off to Google I went.

Apparently this is not a new problem; a post already existed on the MSDN forums. So I spun up SSMS again to use Edit->Intellisense->Refresh Local Cache and *bam*…not there.

At first, anyways.  You can only get at the Intellisense menu when a query editor is open; it’s context-sensitive, and I am completely on board with context.  Love it.

Here are the commands you’ll find in that menu:

  • List Members  (Control + J)
  • Complete Word (Alt + Right Arrow)
  • Parameter Information (Control + Shift + Space)
  • Quick Info (Control-K + Control-I, this is a chord)
  • Refresh Local Cache (Control + Shift + R)

Monday, April 13, 2009

Workstation, I Dominate Thee!

Today was a bit of a battle.  I was playing ping pong with my computer, various software installs, updates and service packs, and was engaged in a few order-of-operations fights.

I did emerge victoriously, however, and my dev workstation is prepped and ready to go, featuring (but not limited to):

  • Vista x64 SP1, fully patched
  • Virtual PC 2007 (v6)
  • Visual Studio 2008 SP1 with Silverlight 2, Silverlight 3 and WPF toolkits and futures installed
  • Expression Blend 2
  • SQL Server Express 2008
  • The latest MSDN Library (local)
  • IIS 7
  • .Net 1.1, 2.0, 3.0, 3.5 with all patches and service packs
  • MS Office 2007
  • Live writer (go blog!)
  • ISO Buster
  • Virtual Clone Drive
  • Dart powerSNMP
  • Chrome, IE8 and Firefox.  Not Safari.  I’d rather take Singular for the C-64.

I was also able to get my DDL pulled off the temp machine I was using last week and have a new database set up locally to develop against.

Should be able to hit the ground running by Wednesday; tomorrow I’ll finish getting my servers running for my dev environment.

Mounting DVD and CD Images in Vista x64

I had a run-in with Vista 64-bit edition today in trying to get some ISOs mounted on my dev machine.  All the company media is out on the network as ISOs.

The roadblocks stemmed from the fact that the 64-bit version of Vista doesn’t work with most of the wares I’ve used in the past to mount ISOs as drives.

A quick Googling helped me find Virtual Clone Drive, which comes complete with fancy sheep icons.  Nice.

It lets you create multiple virtual drives and then you can easily right-click on the drive (from Explorer/My Computer etc) and mount an ISO.

It handily keeps a recent ISO list so that remounting is easier if you repeatedly use the same ISOs (true for me, as I’m currently configuring a bunch of VHDs for Virtual PC).

Thursday, April 9, 2009

Production Planning

Today was a whirlwind of trying to figure out all the jazz we are going to need in our production environment.  Though not immediately necessary, we need to align as best as possible with our dev environment early so as not to run into headaches later.

Everything in dev will be virtualized, so it should be straightforward to move to prod when the equipment gets here.

I spent a good hour or so fighting to get Virtual Server 2005 R2 running on Vista Business (both at SP1).

If you run into a ton of authentication errors there are two main things to try out:

  1. Run IE in administrator mode.  Right-click –> Run As… from the desktop (or a shortcut) and set up a bookmark so you can hit the admin site more easily.
  2. Make sure the virtual server is bound to all IP addresses.

Without those two tricks you’re bound to have a difficult time getting virtualization running under Vista and managing the VMs successfully.

Wednesday, April 8, 2009

Model-Driven

Big day today as far as wins go on the data side of things.

I scratched through the old model several times scouring for the essentials and finding the broken parts.  I talked to several of the longer-standing employees about what parts of the system are important to maintain, which ones are high priority to move and which can be dropped.

I don’t yet have image space set up for this blog…I wanted to post an image of the much-improved RefCode model that I’ve been hacking away at for the last number of years.

By normalizing all of the type data we can better utilize a cache and never have to go to the DB for the simple list lookups.  This clouds some other processes (such as when you want to bypass or clear the cache) but also allows for very simple grouping and mapping between codes.

It is in the grouping bits that I made some great strides, and I will post soon about the approach I took.
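In the meantime, here’s a generic sketch of the kind of cached lookup I mean.  It is not the actual RefCode implementation (the grouping and mapping pieces sit on top of this, and the names are placeholders), just the basic shape:

using System;
using System.Collections.Generic;

// a generic sketch only – placeholder names throughout
public class RefCode
{
    public int Id { get; set; }
    public string Group { get; set; }        // e.g. "WorkOrderStatus"
    public string Code { get; set; }         // e.g. "OPEN"
    public string Description { get; set; }
}

public static class RefCodeCache
{
    private static readonly object sync = new object();
    private static Dictionary<string, List<RefCode>> byGroup;

    public static IList<RefCode> GetGroup(string group)
    {
        lock (sync)
        {
            if (byGroup == null)
                byGroup = LoadAll();   // one trip to the DB, then served from memory

            List<RefCode> codes;
            return byGroup.TryGetValue(group, out codes)
                ? codes
                : new List<RefCode>();
        }
    }

    // clearing the cache is the "cloudy" part mentioned above: anything that
    // edits type data needs to remember to call this
    public static void Invalidate()
    {
        lock (sync) { byGroup = null; }
    }

    private static Dictionary<string, List<RefCode>> LoadAll()
    {
        // ...SELECT the whole RefCode table and bucket the rows by Group...
        return new Dictionary<string, List<RefCode>>();
    }
}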

The predominant theme in the app will be feed consumption a la Twitter so I did a bunch of research on that as well.  Some great resources exist for Twitter architecture and it’s very easy to scale that back and trim down to what we need (don’t need to scale to 100,000 simultaneous requests here, just…maybe…four?).

Tuesday, April 7, 2009

Road Trip

We went up to visit the North head-end for the network today, where most traffic monitoring and shaping takes place.  Pretty cool setup: the office has about 20 displays and a myriad of computers, routers, access to VMs, RDP sessions all over the place, and waaaaaay too much ICMP traffic using up screen real estate IMnsHO.

Closed the door on a couple of APIs that we were looking at yesterday, made some requests to have the DNS records updated today, and got our hands on a Fortinet 100 that should give us some added protection for our production servers.  Really need to get them out of the DMZ.

My first production defect came across the wire today, as well, and I got to address some data model issues that were 'breaking' a query.  A deeper look at the MySql model that was created raises some concern for expandability and maintainability, but we've got carte blanche to start over here.  The previous developer revealed the other day that the system was composed of endless layers of band-aids added over the years to meet the then-current needs.

I'm on the verge of establishing a game-plan here and the work is pretty exciting.  I don't have the years of back-thought on this project and how it came to be, but there are some pretty good people here -- and some that are still in touch -- that can back fill those stories for me and get this thing moving in the right direction.

Monday, April 6, 2009

Land Survey

Today's big effort was mostly in identifying the equipment that is here on the network...and a little bit of starting to imagine what we can do with it.

The call centre is obviously going to be a big component, as is knocking down the average install time.  These two are related in a number of ways, as each role creates work for the other and/or relies on information from the other role (CSRs and installers, that is).

I found a couple of great libraries that I can use to talk to some of our gear. A co-worker -- who has great expertise in the Asterisk VOIP solution here -- and I worked through connecting to the administrative console and poking around with a few of the commands.  Pretty cool to be able to start a phone call with just the following:
// OriginateAction comes from the Asterisk manager API library we're working with
OriginateAction origAction = new OriginateAction();
origAction.Channel = "SIP/619";
origAction.CallerId = "Jimminey MC";
origAction.Context = "DialPlan1";
origAction.Priority = 1;
origAction.Exten = "SIP/800";
origAction.Timeout = 30000;

Still have to dive deeper, but we've got a good basis for an integration path now and are really excited about that.
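As an aside, the manager API itself is just a line-based text protocol over TCP (port 5038 by default), so it's easy to peek at roughly what goes over the wire.  A raw-socket sketch with a made-up host and credentials (the manager account lives in Asterisk's manager.conf):

using System;
using System.IO;
using System.Net.Sockets;

class AmiSketch
{
    static void Main()
    {
        // host and credentials are placeholders
        using (TcpClient client = new TcpClient("asterisk-host", 5038))
        using (StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
        using (StreamReader reader = new StreamReader(client.GetStream()))
        {
            // each action is a block of "Key: Value" lines ending in a blank line
            writer.Write("Action: Login\r\nUsername: admin\r\nSecret: secret\r\n\r\n");

            writer.Write("Action: Originate\r\n" +
                         "Channel: SIP/619\r\n" +
                         "Context: DialPlan1\r\n" +
                         "Exten: SIP/800\r\n" +
                         "Priority: 1\r\n" +
                         "CallerID: Jimminey MC\r\n" +
                         "Timeout: 30000\r\n\r\n");

            // dump whatever Asterisk sends back (banner, responses, events)
            string line;
            while ((line = reader.ReadLine()) != null)
                Console.WriteLine(line);
        }
    }
}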

I have a trial of Dart's SNMP library and will be playing with it over the next few days to see if it's the right library to leverage.  I'm also collecting MIBs to prove that out.

Friday, April 3, 2009

From Old To New

Learning a lot today about co-ordinate systems, GPS logging and various systems for identifying the locations of our towers (and other sites).

We are now working full bore to implement an intake procedure to automate the import of over 1000 records into our mapping software.

Finished some internal tasks (moving to internal DNS for some of our servers) in the interest of security and will continue with that next week.

Thursday, April 2, 2009

Chewing Through Database Changes

Most, if not all, of the active data connections right now are in MySQL and running off a machine that, in short, is a bit of a security concern.  The web server talks to the DB (which is on a separate, dedicated box) through ODBC.

I created a list of inactive accounts and, working through the list with a couple of long-time employees, removed over 50 accounts that still had access to the network, the staff site and the intranet software.

I've changed all the server admin passwords and updated all the ODBC pointers to a new single-purpose account on MySQL.  I've also got a plan with about two dozen touch-points that is being addressed by me and three others who work at the same level as I do.

Next on to IIS and the FTP servers that were running. There are over 160 sites in IIS and 100+ users on FTP.  Resolving those users with our other list, and cross-checking against active clients, we were able to stop 40+ sites in IIS (30 more are suspect) and firm up the FTP server.

We're getting a whole lot of traffic from someone trying to brute-force the admin account on that box...will have to watch that for the time being, but the bigger plan is to move it inside the firewall and close down that channel.  We're also cutting them off now at 3 attempts (instead of 5), so hopefully that will slow their efforts.  I changed it to a strong password, so by brute force they're just wasting their time.

First Day

New job, new blog, great first day.

While I was employed to architect and develop the new management system for the company, I have also been charged with getting some basic security configured.

After a quick inventory, there are about 30 touch points that need to be addressed from a security standpoint; we'll tackle those tomorrow.

The system that I'll be replacing has a lot of manual process. The core of the beast was developed pre-2000 and was actively added to for several years.  Over the last little while it has mostly been band-aids that have been used to patch it up and tweak it to meet changing needs.

It is a MySQL/ASP (classic) solution...and it's on a failing server, if not a server that is at end-of-life.

Great team here, good mindset, lots of equipment and high expectations.  Should be a blast.