Posts filed under ‘Web Development’

JavaScript: Regular Expression for Military Time

This post is more for my future reference than for you.  Sorry to be so selfish!  🙂

I had a need to use a regular expression via JavaScript to confirm if data entered into a text field was in military time format (e.g. 03:30, 23:59, etc.).  With regular expressions, I’m still very much a scavenger.  I do a quick search and “borrow” something similar to my needs and tweak it as necessary.

Well, most of the examples I found for military time regular expressions were a little suboptimal.  They would do something like this:

/\d{1,2}:\d{2}/

Where they are saying before the colon you can have 1-2 digits and then after the colon, you’d have 2 digits.  But that would permit stuff like 98:86.  So they would have subsequent JavaScript code to parse out the data and make sure the first number doesn’t exceed 23 and then make sure the second number doesn’t exceed 59.  Lame.  I wanted to do it all in one quick regular expression.  This is what I ended up with:

/(00|01|02|03|04|05|06|07|08|09|10|11|12|13|14|15|16|17|18|19|20|21|22|23)[:](0|1|2|3|4|5)\d{1}/

I would not be surprised if Clint can optimize this further (I have solicited his help on regular expressions before)– but for now, it’s sufficient for my needs.  It makes sure the hour ranges between 00 and 23, ensures a colon is entered, and makes sure the minutes don’t go over 59.
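For the curious, the same rule can be written more compactly with character classes.  A sketch (note the ^ and $ anchors are my addition– the alternation pattern is unanchored, so it would also match a valid time embedded inside a longer entry like “123:456”):

```javascript
// Same check condensed: [01]\d covers hours 00-19, 2[0-3] covers 20-23,
// and [0-5]\d covers minutes 00-59.  The anchors force the ENTIRE value
// to be a valid military time rather than merely containing one.
var militaryTime = /^([01]\d|2[0-3]):[0-5]\d$/;

militaryTime.test('03:30');   // true
militaryTime.test('23:59');   // true
militaryTime.test('98:86');   // false
militaryTime.test('123:456'); // false (anchors reject the embedded "23:45")
```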

A quick sample of the verification in use:

<HTML>
<BODY>
<SCRIPT LANGUAGE="JavaScript">
function CheckMe(myvalue)
{
   var re = /(00|01|02|03|04|05|06|07|08|09|10|11|12|13|14|15|16|17|18|19|20|21|22|23)[:](0|1|2|3|4|5)\d{1}/;
   if (!re.test(myvalue))
   {
      alert('false');
   }
   else
   {
      alert('true');
   }
   return;
}
</SCRIPT>
<FORM Name="Form1">
<INPUT TYPE="Text" Name="TestMe" SIZE=6>
<INPUT TYPE="Button" onClick="CheckMe(document.Form1.TestMe.value);" value="Click Me">
</FORM>
</BODY>
</HTML>

March 15, 2007 at 10:06 pm 16 comments

User Interfaces – Web Applications and Buying Gas

This evening I got a new Facebook friend request.  That’s a welcome event.  Previously I only had five friends and three of them were relatives– I looked like a loser!  At least now I’m only half relatives.  Today’s sucker was my colleague, Mark Duncan.  And I suspect I know why he joined.

This afternoon when discussing the new Qualtrax User Interface, we talked about Facebook.  Facebook has a marvelous means of data entry in a web interface.  As you are typing in little factoids about yourself (say your school or employer), some very responsive dropdown lists appear with similar items people have already entered.  It’s quite slick and something similar would prove to be very beneficial in our web application.

Because our application is tied so closely to organizations’ quality systems, it is important for us to keep the user experience as easy as possible.  You don’t want someone to not take action on a nonconforming product because they couldn’t figure out how to use the online form.  You don’t want to add any obstacles– you want the process to be simple and smooth.

Last night, I was on the user side of things and got to see first-hand just how important “simple and smooth” is in a user interface.  I had to purchase gasoline.  It took me three tries. 

The first station I stopped at amazingly enough did not have credit card readers at the pumps.  I didn’t even get out of my car for that one.

The second station was more modern.  However when I swiped my card, the console prompted me,

“Credit or Debit?”

Well, I knew the answer immediately.  I wanted “Credit”.  Unfortunately, there was no “Credit” button to be found on the keypad.  There was a nice green key for “Debit”, but no “Credit”.  I was perplexed, but not for long.  When I didn’t answer, the console instructed me to “Please See Attendant.”

I figured I must have done something wrong (user error) or there was some kind of malfunction (bug), so I gave it another go.  I swiped my card and once again I ended up with a dead-end question.

“Well, if it is going to be like that,” I thought and got out my debit card.  I swiped it and when it asked me what kind of card it was, I smugly made use of the provided “Debit” button.  My success was short-lived.  It promptly asked me for my PIN.  I typed that in and watched as it was falsely accused of being wrong.  I gave it another shot, but all paths led to the instruction, “Please See Attendant.”

“Fuck that,” I thought and left.

I went to a third gas station.  This time everything worked without incident.  It took my credit card, it let me pump my gas and all was well.

Yesterday was an extremely long day– I woke up at 5 AM, traveled to Richmond, attended a funeral, ate lunch, attended a burial service at the cemetery, traveled back to Blacksburg, met some girlfriends for sushi and then finally, sought out fuel.  At the second gas station, all I had to do was suck it up and walk in to see the attendant.  Yet, even with my fatiguing day, I chose to get in the car and drive out of my way to another gas station.  Why?  I shouldn’t have to rely on a person to complete a simple task.

Our customers would share a similar sentiment about software support.  Sure, the support team is friendly and helpful and quite a wonderful bunch to work with.  But still, you really don’t want to have to stop what you are doing and rely on a support team to help you complete a routine task.

So I guess this makes the moral from last night: 

If a product interface isn’t simple, people just aren’t going to do it.  And worse– they could be like me.  They could go out of their way to seek out another vendor.

January 19, 2007 at 12:43 am 15 comments

The Walls of Troy, Documentation and Log Files

Walls of Troy Lecture
This evening, I went to see Dr. Sarah Morris of the University of California at Los Angeles speak at Virginia Tech.  Her topic was “Apollo, Poseidon, and the Walls of Troy: Homer and Archaeology”.  She covered a large array of talking points– the excavation history of Troy/Ilium, the new technologies and practices that accompany modern archaeology, how the Trojan Horse may have stemmed from Greek memories of a siege machine, etc.

One note I found particularly interesting was her observation that Troy was the 24th city that was seized by the Greeks.  She mentioned there was even a city that was much bigger than Troy (sounded like “Pegalon”– but not sure of the spelling). 

So she asked, “Why Troy?  Why did this site become more important than the others?”

To answer that, Dr. Morris cited that there were six Epic Cycle poems, the Iliad and the Odyssey all written around Trojan War events/aftermath.  However, it seemed what she felt really solidified Troy’s importance was the continued prominence of the city/site afterwards.  She talked about pilgrims visiting it and she also shared a story about how a city cursed by Ajax the Lesser (aka Ajax of Locris) sent noble young women to Troy for years to serve as priestesses in the Temple of Athena.  Their gesture was an effort to redeem themselves from Ajax’s brutal rape of Temple of Athena priestess, Cassandra, during the war.

Granted, I’m just a layperson, but my biases from years of journal writing and work in document control have me feeling that without the documentation (even fictional accounts), the ongoing visits to Troy alone would not have been enough to sustain its appeal.  In fact, common phrases throughout the lecture were “Homeric Troy” and “Homer’s Troy.”  We did not hear the phrase “Ajax the Lesser’s Troy.”

Importance of Documentation – Monticello and Ash Lawn-Highland
I have a relatively contemporary example of the importance of documentation with historical sites– right from my home state of Virginia!  During the Fall of 2001, Sean and I visited the homes of Thomas Jefferson and James Monroe.  An excerpt of my November 18, 2001 journal entry:

The two homes were quite different. Jefferson’s had tall ceilings, unique architecture and filled with expansive book collections and interesting inventions. Monroe’s was a modest farmhouse, more functional and less showy.

The tours were a little different too. There was a lot of certainty regarding Monticello and its happenings. With Monroe there was a lot of speculation. A lot of “We don’t know [for sure]”s and “We think”s.

These two men lived in the same time, only 2 1/2 miles apart. They were friends. They died exactly five years apart.

So why the discrepancy in knowledge?

Jefferson wrote things down.

They gave an overwhelming statistic of just the letters he wrote. Perhaps 20,000 letters?

He documented daily life. He recorded his thoughts and opinions as well as the mundane.

We know so much because he wrote. We, 200 years later, still benefit.

The moral– write things down even little things about dry cleaning and toilets, even about the placement of nails. Write it down so the future won’t have doubt.

Back to Troy– remember those young ladies that were sent to be priestesses to redeem Ajax’s offense to Athena?  I’m told there was a lot of doubt and speculation about that transaction.  There were thoughts the ladies had to run a gauntlet when they first arrived at Troy and no one really knew how long they served as priestesses or how it worked.  The picture became more clear within the past few decades– when an inscription describing the legalities of the ladies was found in a completely different city.  The picture became more clear…. because of documentation.  🙂

Log Files
Maybe that is why I’m big on log files and audit trails in my software work.  I recently described the new QualTrax Error Handling, including our usage of low exceptions for logging purposes.  QualTrax has historically had a number of different log files that could be toggled on or off as needed.  Each service had its own log.  We had database connection logging, file access logging and of course, general error logging in the event log.  At the same time, every action to a document, workflow, user, group and test is recorded in an audit trail.  In my Laboratory Information Management System (LIMS) work with QualLinc, the importance of logging persists.  The three features that have the most potential for problems (PDF Generation, Processing Incoming Emails and Attachments, Distributing Batch Emails with Attachments) are logged heavily– allowing the system to record each key step for traceability.

In both applications, the availability of this extra documentation proves to be an invaluable tool and is usually instrumental in diagnosing an issue.  And when one is troubleshooting an issue with a software application… one is trying to answer some of the exact same questions archaeologists are struggling with:

“What happened?”

“When?”

“Who did it?”

“What went wrong?”

🙂

September 18, 2006 at 10:02 pm 3 comments

Word 2003 Crashes When Viewing Custom Properties Updated by DSOFile.DLL

Background Info
One of the neat things we take advantage of in QualTrax is Microsoft’s DSOFile.dll.  That DLL allows programs to view and/or update the properties of Office documents without requiring Office on the server or requiring the bulky overhead of Word Automation (which Microsoft does not recommend on a web server). 

This ability may not sound that exciting until you realize Office provides a Custom tab in the Properties window to let you record your own unique information.  QualTrax takes advantage of this to embed (and subsequently display!) specifics about the document and its lifecycle in the document itself– stuff like Revision Number, Publication/Effective Date, Editor, Expiration Date and even the Signature Manifestation for FDA 21 CFR Part 11 approvals.

Migrating this feature over to .NET 2.0 and C#, I ran into some peculiar behavior. 

Symptoms
The process worked beautifully when I was filling in “TODO” for every field.  As I started to flesh out the Document objects, I plugged in real live information.  That’s where I ran into problems.   Everything would run smoothly with no errors to hint something was wrong. 

After the code executed I went to open my Word document and it would open normally.  All seemed well.  However anytime I went to Word’s File->Properties menu, Word 2003 would crash:

Clicking on “What data does this error report contain?” provided little assistance:

Once I reopened Word– if I went to Insert->Field; selected DocProperty as my Field Name, I could see my custom variables listed in the Field properties list box and I could insert most of them into my document without crashing.  Saving after inserting those fields was another matter altogether.  😉

When I went to the file through Windows Explorer and right clicked on the Word file and selected Properties, the normal Word Properties window would come up.  Alas, when I clicked on the Custom tab, my fields were not displayed:

Troubleshooting
So I went through a fun troubleshooting experience.  First I thought DSOFile was not completely writing my file, but that wasn’t it– and because I was using System.Runtime.InteropServices.Marshal.ReleaseComObject, I knew the object was no longer interfering.  I thought maybe the approval record data was too long, but that wasn’t it.

After opening and crashing the same document a few times in a row, Word takes the liberty of repairing the document for you and removing what it thinks is bad.  That’s where I got a good strong lead– I could see it added all my fields until it got to a date field that would have been blank.

“Eureka!” I thought, “It must not like that crazy 1/1/0002 12:00:00 AM date.” 

But my heart sank a bit when I ran to the code and saw that I had already accounted for that (Blast that foresight of mine!).  If the date was null or the system default, I replaced it with an empty string.  But for kicks, I changed that function to replace those bad dates with my second favorite test string, “TODO” (Side note– my favorite test string is “ISUCK” or variations such as sucker@isuck.com).

I ran it again and after a series of crashing and reopening, Word repaired the document.  This time it got past all the blank dates… but it stopped right before a text field that would have been set to an empty string.  So I intercepted those blank fields and changed them to “N/A”.

Ran it again and great success!  No crashes and all variables were accounted for.  Times were good.

At that point I was unenthused about having to account for the different languages our application runs in.  I was just thinking “What’s N/A in Portuguese?” when Mark Duncan asked, “Can you use a space?”

I changed my “TODO”s and my “N/A”s to a space– ran it through and Word 2003 liked that.  So there you go.  A blank space is perfect– no need for translations there!

Summary
If you are using a DSOFile.OleDocumentPropertiesClass object and calls such as dsoDoc.CustomProperties[index].set_Value(ref myValue); through C# and you started getting similar Word 2003 crashes— be on the lookout for empty strings in your field values.  A quick little check may be all you need:

// Word 2003 chokes on empty custom property values-- substitute a single space.
if (myValue.ToString() == "")
  myValue = " ";

August 25, 2006 at 6:37 pm 3 comments

Programming and Project Runway

It is roughly 11 minutes (plus an extra 10 minutes to build up a TiVo queue) until my latest TV obsession, Project Runway, commences.  I haven’t quite figured out the reasoning behind my fancy.  With the exception of some cute Bowman Handbags, I don’t own anything that the general public would deem fashionable (unless Appalachian Trail hats that are too big for my head count).

So I’d like to think my fascination comes from watching the lifecycle the pieces go through.  They start out as simple sketches, but after a visit to Mood Fabrics and 1-2 days of frantic sewing they become (hopefully) these beautiful garments.

Well, that may not be it either.  This past week, I realized I do a very similar process with my programming and I haven’t quite found the same level of entertainment there.  Nonetheless, here’s one recent example of my process and how it parallels that of the fashion designers on Project Runway.

The Challenge
My assignment was to take data we already had in the database and make it into a “Rolling 53” report.  Basically they wanted to take test data and evaluate it in batches of 53 to see how many positive test results there were in each grouping.  First they would look at the most recent 53 items.  Then they wanted to drop off the most recent item and look at the next 53 items.  
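Stripped of the database and ASP.NET plumbing, the windowing logic itself is simple.  A rough sketch in JavaScript (illustrative only– the positive field and the sample call are my invention; only the batch size of 53 and the drop-the-most-recent-item rule come from the report), assuming the results arrive ordered most-recent-first:

```javascript
// Rolling window: count positives in each consecutive batch of `windowSize`
// results, starting with the most recent batch, then dropping the newest
// item and taking the next batch, and so on down the list.
function rollingCounts(results, windowSize) {
  var counts = [];
  for (var start = 0; start + windowSize <= results.length; start++) {
    var positives = 0;
    for (var i = start; i < start + windowSize; i++) {
      if (results[i].positive) positives++;
    }
    counts.push(positives);
  }
  return counts;
}

// e.g. rollingCounts(testData, 53) yields one positive-count per window
```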

Sometimes the designers on Project Runway are given a dossier on a particular client which includes samples of past colors and styles used by that individual or organization.  Well, I got this cryptic Excel spreadsheet:

Sketch Time
If you look through my work notebooks, you’ll typically find a lot of little drawings of the screens or functions I’m working on.  Even when I don’t do a full blown specification, I still draw out what I’m doing and/or write down related database fields.  This project was no different.

Now in Project Runway 3: The Road to the Runway Special, Tim Gunn and the judges were evaluating some of the applicants’ sketches and they noted that drawing the sketch and making the garment are entirely two separate things.  I believe Michael Kors summed it up as, “They have no idea how they are going to make these clothes.” 

Well in programming, you have to be careful to keep your design realistic as well.  In my above notes, you can see I was already making notations about my logic.  There is a note about a for loop and I jotted down table names I expected to query.  The large vertical rectangles surrounding my “cells” (not necessarily rectangular in the drawing) are particularly telling.  They depict my thoughts on how I was going to use nested HTML tables to achieve my look.

Materials
With the Rolling 53, my Visual Studio 2003 Development Environment with ASP.NET and HTML syntax was sufficient.  However, in other situations, I may be shopping around for suitable materials (aka third party components).  In that case, I would certainly be keeping in mind that just like the Project Runway designers, the materials I chose would reflect the quality of the final product.  I would be judged on my material and my choices– I would be the technology I use.

Tim Gunn
In Project Runway, Tim Gunn serves as a counselor and initial critic to the designers.  For the Rolling 53, my Tim Gunn is a woman who is just as personable, understanding and frank as the Project Runway personality.  Her name is Debbie and when I showed her my initial work, she had one of those famous Tim Gunn pauses.  It meant she had a much different perspective than I and ultimately advised significant changes. 

Since I was armed with the original spreadsheet and I felt my design was close to what the customer described, I went with it.  In Episode 1 of Season 3, Keith did not heed Tim Gunn’s advice and his choice paid off– he won the first challenge.  Luckily, my risk was as successful as Keith’s.  But– I know very well that like Tim Gunn, Debbie’s advice is very credible and accurate and should always be given serious consideration.

Below is a screenshot of my initial work.  Very similar to Kayne deviating from Tara Conner’s color recommendation in Episode 2, I deviated from the color scheme in the original spreadsheet.

 

The Runway Show and the Judges
It’s a lot less glamorous than models, the L’Oréal Makeup Room, the TRESemmé Hair Salon and the Banana Republic Accessory Wall.  Our presentation came on a late Monday night using GoToMeeting, Internet Explorer and a myriad of cell phones.  The “judges” were a few key individuals spread out over a couple of time zones.  I could not see their facial expressions during the meeting, so like the contestants, I really did not know exactly how my work was received until the discussion at the end.  Turns out, this work had a positive reception.  One customer even claimed, “Perfect!”

But, just as poor Bonnie found out in Episode 4— sometimes even if your finished product stays true to a sketch that was originally approved by the customer, it may not meet their full fancy when it is all said and done.  And like Jay and Chloe who won previous Project Runway seasons, you also have to think about the creation in production mode. 

As a result, we had a few items come out of our meeting.  They decided they wanted to list an identification number in each cell for easier reference and in order to support printing on black and white printers, we bolded the positive results in addition to color coding them.  Our revised version looked like this:

“Make It Work”
During the final episode of Season 2, the designers found out at the very last minute, they had to add one more look to their collection.  Well, that’s not an uncommon occurrence in the programming world!  In the Rolling 53’s case, we got the report done and found out that it needed to be emailed to two email addresses twice a week. 

With time, I could write a service to deploy the information regularly.  I’m sure the designers of Project Runway have plenty of things they could do…if they had the time!  In both cases, we do have a time constraint and have to make do with what we got.  Enter in good ole Debbie (aka my Tim Gunn).  She volunteered to run the report twice a week and mail the results out. 

Thanks to Debbie, we came up with a very simple (and cost effective) way to “Make It Work”!

August 17, 2006 at 12:13 am Leave a comment

PDF Open Parameters

A couple of weeks ago, a customer asked me if it was possible to create a link that forces a PDF document to open in 100% zoom mode.  I told them I would look into it, but “I’m not optimistic.”  Turns out my instincts were WRONG!  Adobe provides a whole slew of Open Parameters for their PDFs– including one that allows you to specify the zoom level.  The full documentation for Adobe 7’s Open Parameters is below.

PDF Open Parameters – In Full Zoom
http://partners.adobe.com/public/developer/en/acrobat/PDFOpenParameters.pdf#zoom=100

PDF Open Parameters – 50 Percent Zoom
http://partners.adobe.com/public/developer/en/acrobat/PDFOpenParameters.pdf#zoom=50
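On a web page, this is just a matter of appending the parameter to the link’s URL fragment.  A tiny illustrative helper (the function name is mine, not Adobe’s; per Adobe’s documentation, multiple open parameters can be chained with &):

```javascript
// Ask Adobe Reader to open a PDF at a given zoom level by tacking
// an open parameter onto the URL fragment after '#'.
function pdfZoomLink(url, zoomPercent) {
  return url + '#zoom=' + zoomPercent;
}

pdfZoomLink('PDFOpenParameters.pdf', 100); // "PDFOpenParameters.pdf#zoom=100"
```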

July 24, 2006 at 6:38 pm 18 comments

You are the Technology You Use

I think one of the ongoing challenges of being a software developer is the blurred lines between your product and the technologies that complement or are used by your product. That blurry border can often impact the impression of your work.

Complementary Technologies
With QualTrax, a feature of the Document Control module is that it doesn't restrict what kind of file format you use– it will accept and control all kinds of documents.  Because of that flexibility we don't provide our own proprietary editor.  Instead we rely on the applications that are already targeted and fine-tuned for that file format, the same applications the users are already accustomed to working with.  For example, the engineering department can continue to use AutoCAD to update their DWG files.  Human Resources may utilize Microsoft Word or Microsoft FrontPage to maintain their Employee Manual.  Meanwhile other departments can continue to use Visio or ABC Flowcharter to upkeep their process maps. 

QualTrax

Well, a downside of that flexibility is often when a user experiences an issue in their editing application (Microsoft Word for example) — they don't draw a distinction between that application and QualTrax.  From their perspective, all they know is they are having trouble doing what they need to do.  Their struggle may just be a usage question like "How do I make this 'X of Y pages' bigger in the header of my Word document?" or it could be something that is directly related to a change or bug in the editing application.  A good (though old) example would be the Office 2000 release.   In that initial release, suddenly any Office 2000 document opened within Internet Explorer triggered a password prompt.  It was a Microsoft bug and it was rectified by Microsoft in Office 2000 Service Release 1.  Nonetheless, it was easy for a customer to get annoyed and think, "QualTrax is always asking me for a password." 

Third Party Controls
The waters are even muddier when it comes to the third-party controls you integrate into your software.  Unfortunately, I recently ran into an example with the RichTextBox control.  For this particular project, we were modifying a pre-existing web application to create a similar version for a different use.  In the original application, the RichTextBox control was a perfect fit and did its job well.  No complaints.  But when we went to deploy the new application, we had a series of complications. 

  1. First, the new web server was using ASP.NET 2.0.  ASP.NET 2.0 shipped with a new security feature, allowing Trust Levels to be set on a by-application basis.  The RichTextBox control required a Full Trust Level, but the web server being used was locked down to only permit Medium. 
  2. After we got over that hurdle, persistent popup messages were reminding us to purchase the component. 
  3. Once the purchase was made, the customer reported a series of issues with using the control.  It turns out in the original application, the requirements of the component were pretty standard– bolding and formatting text, maybe the occasional italic here and there.  The new customer was using tables heavily and through that usage, revealed difficulties. 
  4. You couldn't size tables by percentages; you had to use pixel count. 
  5. It wasn't intuitive how to resize the individual table cells. 
  6. Originally there was confusion as to how to align the text in a table cell (though that does appear to work well). 
  7. Further compounding the other issues, the RichTextBox control wasn't always refreshing its view.  That meant a user could be resizing a table cell, but not see the changes applied to the screen and therefore did not have adequate feedback to know they were successful.

Rich Text Box

Now don't get me wrong, there were other hiccups with the system.  But they were few and far between– a vast majority of the issues and frustrations arose from that one control.

Unfortunately, the customer's keen eye is not likely to isolate the control as the culprit.  It is the system that is deemed buggy.  Even though I, the developer, know the root cause, "buggy" is a label that still stings.  In the customer mind, there is no separation between the controls you use and the system.

Best Practices Thus Far
In the RichTextBox example, I can be smug and think about how I did not pick that control (I can report "buggy" still hurts though!).  But for the controls we are empowered to pick, it's extremely important when evaluating them to keep in mind that their performance and reputation directly affects yours.

At QualTrax, we pay significant note to the controls we include.  In fact, we have a dedicated Evaluating and Purchasing Third Party Controls procedure (falls under ISO 9000:2000 7.4.1 Purchasing Process) outlining our decision criteria.  We, of course, look at the threading model, the cost and the scalability of the component.  A couple of additional items we consider include:

  • Pre-Installation Requirements – What other components and software does this component need?  How will that affect QualTrax's Pre-Installation System Requirements?  Our customers work over a variety of databases, platforms and languages.  We by no means want to alienate a customer by adding a control that means they can no longer use a feature.
  • Deployment – How hard will it be to deploy the new component in the QualTrax CD Install?  Are there any specific permission needs?  It's been our experience that IT Administrators aren't quite thrilled with the notion of the anonymous web access user being an administrative account.  If that's something the control requires, it is certainly going to lose favor with us.
  • Responsiveness of the Support Team – This is key.  We've already established customer perception is tricky.  If an issue arises with the component, it's going to be deemed a "QualTrax" issue.  In the event that happens and we have to rely on the support team of the vendor, how effective and fast is that team?  If it takes the vendor three weeks (not to mention a lot of prodding) to get an answer to us, imagine the response time we'd have to our customers.  Once during an evaluation period, I had to email a vendor's support staff a question.  By the time that company sent back their first response to me a week later, the content of their answer made no difference.  I already knew they weren't the vendor for us.

In addition to those considerations, any component we implement in the software goes through a detailed feature-specific testing process on our varied platforms and with the same rigor we give our FDA 21 CFR Part 11 testing.  When we replaced SAFileUp with aspSmartUpload (for pricing considerations, not performance), the test script covered a variety of browsers including Safari, Internet Explorer, Netscape and Firefox.  The test script even specifically called for the download of files that had Japanese characters in the title.

Test Script

Leveraging Advantage Through Support
Despite one's best efforts in component and technology selection, with all the different customers, their different configurations, their different permission sets, their different usages and different levels of expertise, a question or an issue is eventually going to arise.

Although the difference between our application and other applications is very clear to us, it is our practice in QualTrax to assist with everything we can.  We are aware of and sensitive to the customer's perception and that's what we act on.  If it turns out to be a situation that needs to be expedited up to Microsoft, Adobe, Oracle or some other organization, we will help the customer in that effort as well– participating in conference calls or serving to "translate" the nature of the problem.

And suddenly the blurry line of what's yours and what's not can become an advantage.  From the customer's vantage point, you're not fixing Word or Internet Explorer or Visio or SQL Server or AutoCAD; you're fixing the problem.  Even if you are troubleshooting an outside technology– if you solve the customer's issue – you are just as much of a hero as you would be if you corrected your own code.

And what's the customer perception when it's all said and done?  That your team is the place to go for a resolution. 

I have an example for that as well!  About a year ago, one of our customers made an update to his server that broke all his web-based applications, including QualTrax.  Out of all the vendors, he called us first.  He said he knew we would be the ones who would fix the problem quickly.  His faith was well placed that day – we didn't let him down!

So in the customer's mind, you are indeed the technology you use. 

But how you choose to handle those perceptions… could very well trump all.

June 24, 2006 at 3:39 am 1 comment

Real Life Recapitulation?

In 1866, a German man by the name of Ernst Haeckel developed a theory called "Recapitulation".  He proposed that as it developed, an embryo would pass through forms of its evolutionary ancestors; it would "recap" the development of the species so far.  According to Haeckel, as a human embryo developed, it would phase through the form of a fish, then a lizard, then a chicken, then a monkey before finishing at human.  So each time a new life was conceived, the evolutionary process would supposedly play itself out again in the closed venue of the womb.

Haeckel

Well, as it turns out Haeckel's findings have been long disproven and his credibility shattered!  Recapitulation has no place in modern biology.  But could the concept still be alive and well in programming?

I've been working on writing the QualTrax Document Control Engine in .NET.  It struck me today that my efforts have had some similarity to QualTrax's past feature progression!  In some cases I started off with the base features– the features that existed way back in 1997 in Version 2.3 and then moved on to finalize the later features.  For example:

  • I started with the basic Document class and then I worked on approval routing.  After basic approval was working, my efforts turned to ensuring Serial Approval (introduced in 2002) was working. 
  • When QualTrax was originally released, HTM was the most popular document format.  Unconsciously, I completed the HTML Automation and Automatic Link Conversion features before moving on to Word Headers and Footers (introduced in 2003) and then finally to PDF Conversion (introduced in 2004).
  • Once the document lifecycle was working, I tackled the Document Compare Add-On Tool (introduced in 2004) and the Out of Office Manager (introduced in 2004).

Unfortunately, like Haeckel, my findings have some flaws:

  • Yeah…I probably was not following the evolutionary path of QualTrax features.  Be it an embryo or a program, you have to develop the backbone early on– your framework for everything to follow.  In QualTrax's case, the backbone of Document Control just so happened to be included with the core product from the very beginning.
  • I can say I did Serial Approval after regular Parallel Approval– but in fact, the database and backend code were done concurrently.  I just didn't happen to focus on and test serial approval until after I knew the base was working.  😉
  • And the biggest flaw of all!  Before the Document Control Lifecycle was even approached, Folder Specific Settings was fully complete.  Folder Specific Settings post-dates all of the other features I mentioned tonight– it wasn't introduced until June of 2005!  Not only that, but brand new settings (PDF Security Settings) were added– all before the first Document Revision was ever started in the new system.

Well… it was an interesting thought to explore nonetheless! 🙂

June 19, 2006 at 11:33 pm Leave a comment

System.IO.File.Exists Default Directory with ASP.NET App

Derek Pinkerton and I recently ran into this working on the latest QualTrax code!

We have a class library that reads from a custom .config file— aptly named qualtrax.config.  One of the first things we do in that class library when retrieving a setting is verify that the file exists.  It's a simple call, something like:

if (!System.IO.File.Exists("qualtrax.config"))
{
   //Our Error Handling
}

Because we are not specifying a directory, it checks for the file in the current directory.  When debugging our test Windows application, this “current directory” is pretty intuitive: it's looking for the qualtrax.config file in the \bin\Debug folder.

When we started using the library in an ASP.NET application, however, the expected location of the qualtrax.config file was not as intuitive.

Right off the bat, it was not able to find the file.   So I copied it into the website root so it sat side by side with web.config.  Still the file could not be found.  I put it in the bin directory.  No luck.  The App_Data directory.  No cigar.  We knew it wasn't a permissions error because when Derek hardcoded the path, it was able to read the file just fine.  We just didn't know what directory it was trying to look in by default.

I was about to embark on a pretty inefficient journey when Derek said, “I’ve been meaning to try using filemon to see where it is looking.”

So it was FileMon to the rescue.  We installed it, turned on capture right before our System.IO.File.Exists call, and then checked the log.

353 4:12:26 PM aspnet_wp.exe:5412 QUERY INFORMATION C:\WINDOWS\system32\qualtrax.config NOT FOUND Attributes: Error

This won’t be the end of our efforts on the matter, but there are two morals of the story thus far:

1) If you are doing a File.Exists call in an ASP.NET application without specifying an explicit directory and things seem to be missing– it very well may be looking in the Windows\System32 directory! 🙂

2) FileMon proves time and time again to be a very valuable troubleshooting tool.  In the past, we’ve found it especially helpful with permission problems.  Shame on me for not thinking of it sooner.
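As a follow-up thought, one likely fix– a sketch, not something we had in place at the time– is to build an explicit path from the application's base directory instead of relying on the process's current directory:

```csharp
// A sketch of one possible fix: resolve qualtrax.config against the
// application's base directory (the web root for an ASP.NET app) rather
// than the process's current directory, which for aspnet_wp.exe turned
// out to be C:\WINDOWS\system32.
string configPath = System.IO.Path.Combine(
    AppDomain.CurrentDomain.BaseDirectory, "qualtrax.config");

if (!System.IO.File.Exists(configPath))
{
   //Our Error Handling
}
```

The same code then finds the file in \bin\Debug when run from the test Windows application and in the web root when run under ASP.NET.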

June 16, 2006 at 11:02 am 4 comments

JavaScript: Changing a Button’s onClick Programmatically Client-Side

Today we had a QualTrax customer who wanted to override the onClick call of one of our form buttons and add in a quick popup reminder.  Since this was a standard HTML button in a standard HTML form, it seemed like it would be fairly easy via JavaScript in the screen's flexible footer.  Surely we could just do:

document.FormName.ButtonName.onclick="alert('Here is a pop up message');"

However, when I tested that change, the results were a little boring.  The page just sat there idle.  No popup message, no errors, nothing.  So I did some research.

It turns out the onclick property of the button expects not a string like I was passing in, but an actual function reference.  A quick revision produced the desired behavior:

document.FormName.ButtonName.onclick= function() {alert('Here is a pop up message');};

And actually– wrapping it in an impromptu function turned out to be quite handy.  There were already other activities going on with the button's out-of-the-box onClick event that we wanted to preserve.  We were able to just include them into the function:

document.FormName.ButtonName.onclick= function() {alert('Here is a pop up message');OriginalJavaScriptCall1();OriginalJavaScriptCall2();};

Now when they click on the button– they get the popup message and then it moves on to finish the rest of its usual tasks.

We tested this script call successfully in Internet Explorer 6.0 SP2, Netscape 7.2 and Firefox 1.0.4.
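For reuse, the same pattern can be wrapped in a small helper– a sketch; wrapHandler is my own name for it, not part of the original page– that preserves whatever handler was already attached without naming the original calls explicitly:

```javascript
// wrapHandler (a made-up helper name): returns a new onclick handler
// that shows the reminder first, then runs whatever handler was
// already attached to the button, if any.
function wrapHandler(oldHandler, message) {
  return function () {
    alert(message);                // the new popup reminder
    if (oldHandler) {
      oldHandler.call(this);       // preserve the original behavior
    }
  };
}

// In the browser it would be wired up like:
//   var btn = document.FormName.ButtonName;
//   btn.onclick = wrapHandler(btn.onclick, 'Here is a pop up message');
```

Capturing the old handler this way means the footer script keeps working even if the button's out-of-the-box onClick changes in a later release.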

June 2, 2006 at 2:10 pm 14 comments
