
Archive for the ‘Programming (and Scripting)’ Category

AppRefactory Website Down Temporarily

31-Jan-19 11:12 am EST
The AppRefactory Inc. website is down temporarily while some hosting arrangements and Microsoft Azure cloud account changes related to our recently updated service agreement (as a Microsoft Solutions Provider) are processed.  This will likely take another week or so, but another update will appear here when the site is back up.  At that time, we will also be able to act as a Microsoft Cloud Services Provider (CSP), which will be of value to existing and future customers.

Please bear with us during this outage period.

Planets With Atmospheres: Almost Available?

26-Apr-18 03:00 pm EDT

 


Frontier staff have recently been heard hinting that planet atmospheres could gradually be rolled into players’ Elite Dangerous experience soon!

For me, a veteran Elite CMDR who has been playing various versions of the game since its introduction in 1984 (yes, I am that old), being able to interact with planets, with or without atmospheres, is simply a basic feature.  Although the initial release of Elite offered only single-planet star systems where the “planet” was really just a line-art circle (whose surface would claim your Cobra Mk III if you flew into it), Frontier: Elite II and its sequel, First Encounters, both let you take off from the partially terraformed moon Merlin in the Ross 154 star system.  There, from the tarmac of the local starport, one could see the reddish sky and the eerie gas giant Aster dominating the skyline, with the lights of a nearby domed city also in view.  Elite Dangerous has in some respects taken us back to an earlier time, when such extravagances as being blasted to dust for lifting off from that planet-bound starport without tower clearance were but a glint in David Braben’s eye.  (Braben is, of course, the mastermind behind the Elite franchise as well as its original programmer.)

CMDR ObsidianAnt, who runs an extremely popular commentary channel on Elite Dangerous, shares in his latest YouTube video a preview of what might (and should) be coming through 2018 and perhaps 2019, merging footage of an Asp Explorer spaceframe with a short demo of worlds created in a tool called Space Engine, available for download here.  ObsidianAnt says in the video that Space Engine and Elite Dangerous are “two very different pieces of software”, but, perhaps not being a software developer himself, he may be missing some background.  Whatever code underlies Space Engine, I’m skeptical at the outset that the two titles can’t be integrated.  True, software integration involves numerous tasks; but speaking as a systems developer (my own strength), I’ve several times in my career been handed two “very different” pieces of software and had some degree of success getting the job done.  Superficially, I’m not seeing any architectural issues or other seemingly insurmountable challenges.  Frontier Developments obviously has a very capable team of software engineers, and it would be something just short of unimaginable to say a third-party product like Space Engine can’t be made to work with Elite.

Of course, one must keep in mind the console platforms, which might introduce challenges I could not, in fact, imagine.  But on the PC, it’s unlikely to my mind that the effects we’re seeing in Space Engine can’t be successfully migrated to Elite Dangerous.  At the very least, a perusal of the Space Engine source could inform stronger implementations of atmospheres on the worlds of Elite Dangerous.

If you have a different take on this subject, please chime in with a comment below.

And regardless of the timeliness of new feature introductions, kudos to Frontier Developments, creators of Elite Dangerous, for building a truly immersive and enjoyable spaceflight sim.  We’re all on the edge of our seats waiting for that next “big thing” to come out… we know you won’t let us down!

The Future of Elite Dangerous: The Great In-Game Debate

25-Jan-18 11:31 pm EST
CMDRs IronJaguar and SoapyKnight joined me in an unarranged VoiceComms chat session this evening to discuss wing options.

Or so I thought.  Suddenly we were talking about VR gaming and our collective disappointment with how long new features were taking to be rolled into the Elite: Dangerous universe.  As a software developer myself, I’m acutely familiar with how software is produced.  Prior experience with the world’s largest software production company, Microsoft, furthered that education and acquainted me with the most modern practices of the full software development lifecycle.  I thought I’d bring this view to a pair of gaming consumers: one from New York, and another a fellow Canadian who lives relatively close geographically (which is not a given in the world’s second-largest country).  CMDR IronJaguar, in particular, laid the heaviest expectations on Frontier (the producer of the Elite game series).  Could he be convinced to be more understanding of the issues involved in producing Elite: Dangerous?  And what about Frontier’s competitors?  Where is Star Citizen?  What about EndSpace and From Other Suns?  Could they pose a threat to Elite’s dominance in the VR flight sim market at some point?

Watch today’s gaming session here to find out!

A Solid Programming Intro (for Beginners)

07-Dec-17 08:38 pm EST

 

Microsoft Virtual Academy: Introduction to Programming with Python (#8360)
Are you new to the world of programming?  I keep telling people it’s really quite simple: if you apply yourself, it’s something anyone can get into if they’re genuinely interested.  And no, you don’t have to go to college or university to learn how!

So what’s a good place to get into the world of software development quickly and see if it’s something that might interest you?  Recently, I decided now would be an opportune time to pick up yet another programming language: Python.  It’s been getting a fair bit of attention lately and, I discovered, can be useful when exploring the emerging world of Artificial Intelligence (AI).  In fact, I studied AI while attending a pre-law programme at the University of Manitoba many years ago.  (I’ll forgo saying how many.)  There I was able to get into the world of AI through an unlikely major: Philosophy.  The Computer Science (Comp. Sci.) programme wasn’t yet offering any curriculum in AI, and it would be a few more years before the Internet made programming attractive as a career choice for me.  But I’d already taken an intro Comp. Sci. course with prerequisites waived by the Dean of Arts, and had amassed a fair bit of technical skill through my exploration of computers as a personal interest.  I knew the opportunity to study AI wouldn’t likely come again while I was at school, so I signed myself up.

What has any of this to do with Python?  Well, some feel that being a self-taught programmer puts one at a kind of disadvantage.  I feel strongly that they’re wrong, although there is a lot of reading to do to get up to speed on programming theory and data management before one can safely claim a Comp. Sci. equivalency.  And then there’s the environment of a university, which just can’t be replaced.  Even so, online study can make you a productive resource in many organizations, including those that don’t offer employment to anyone missing a Comp. Sci. degree (or lacking the opportunity to get one).  While picking up Python, I came across a curriculum that offers a performance transcript and even a certification for paying customers.  The curriculum itself is, however, freely available and geared toward the new programmer.

Why might an experienced programmer take this course?  As one of the instructors points out, a programming language is like a spoken language: if you don’t use the skill, it can become “rusty” and eventually even require retraining.  So while tempted to dive right into Python syntax, you might find it helpful to take the two-day course, or at least challenge the exams that come with it (in the paid edition, which is reasonably priced by the vendor, Microsoft), and re-verify that you’re up to speed.
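To give a flavour of the territory such an introductory course covers, here is a minimal sketch of a classic beginner exercise in Python (my own illustration, not taken from the course materials):

```python
# A classic beginner exercise: describe some basic properties of a number.
def describe(n):
    """Return a short, human-readable description of the integer n."""
    parity = "even" if n % 2 == 0 else "odd"
    sign = "negative" if n < 0 else "non-negative"
    return f"{n} is {parity} and {sign}"

for value in (4, -7):
    print(describe(value))
```

Small, readable exercises of roughly this size are where most introductions begin before moving on to collections, functions and file handling.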

Alternatively, if you’re in a .NET certification programme, you may find that this material nicely complements the other available materials out there.

This course wins a rare 5-stars from me!

Project “ARTeRMis” Site Published

15-Nov-16 12:25 am EST

Link to “Edgewater” Tenant Site Prototype

The Property Management Application (currently code-named Project “ARTeRMis”), a much larger property management tool based on Microsoft SharePoint, moved a step closer to delivery today with the publication of one of its trial components: “Edgewater”.  This component is simply an amalgamation of a number of different elements native to SharePoint, hosted in the Office 365 environment, and is set up to product-test their suitability for inclusion in the TRM (Tenant Relationship Manager) application delivery going forward.

ARTeRMis will ultimately be heavily dependent on Office 365, SharePoint and ASP.NET MVC when it ships, currently forecast for initial delivery sometime in 2017.

Fresh New Look for The AppRefactory Inc.

03-Nov-16 09:23 am EDT
After 3+ years hosted at Weebly.com, it was finally time to take The AppRefactory Inc. company website into a modern hosting environment with features and integration potential that let us demonstrate, albeit in brief, what ASP.NET MVC can offer.  Dynamic product listings with breadcrumb sub-navigation, upload sections for partner contracts and résumés, and database-driven contact forms that make it easier (and more convenient) than ever to stay in touch are all just the beginning.  In the days ahead we still expect to add:


The AppRefactory Inc. website redeployment announcement graphic: http://apprefactory.ca

  • Links to customer features site (requiring login) via Office365, Visual Studio (online ed.) and SharePoint,
  • Highlights and links to ongoing software development currently being undertaken by the company,
  • The ability to book time online with a consultant to review your software service needs, or to set up an in-depth remote service session through HackHands.com,
  • Subscriptions for partner companies and contacts looking for email updates on consultant availability and/or major site & service offering revisions, and
  • Links to WindowsStore.com and related sites for specific product integrations (Windows desktop, server and phone all to be included).

So stay tuned!  There’s much more yet to come… and you won’t want to miss any of it.

(Additional graphics related to the new website can be found on our Yelp.ca listing.)

We’re Baaaaaack……

23-Aug-16 04:47 pm EDT
Due to certain issues with the “free” WordPress/IIS host I’d been using on and off for the past couple of years, I’ve ended my experimental hosting experience and returned here after all.  A couple of minor articles were deleted, but nothing too critical.

So I’ll resume in the weeks ahead posting here on articles of interest mostly to me, but perhaps to some of you out there as well. 😉  Hope the summer is going well for all!

Why cloud computing is still a hard sell, but doesn’t have to be (Re-Blogged)

27-Sep-14 10:43 pm EDT
Very candid exchange between two enterprise-tech pundits on the current state of affairs in the cloud space. Can the cloud save you money? As is so often the case, success is typically found in the execution as much as in being duly responsive to customers. Commentators from Ericsson and Apcera offer perspectives on their own experience which might well be mirrored elsewhere…

Gigaom

The definitions of cloud computing have shifted a lot in the past several years, but a few things never change. Whether it’s located in an Amazon data center or a company’s own, whether it’s virtual servers or an entire platform for deploying applications, the cloud is supposed to serve many users, it’s supposed to improve flexibility and it’s supposed to save money. It all sounds great, but these guiding lights don’t always jibe with existing attitudes toward security and compliance and the systems put in place to enforce them.

On this week’s Structure Show podcast, we interviewed Derek Collison (above, left) — founder of a company called Apcera that’s all about making it easy to enforce policies while gaining the benefits of cloud computing — and Jason Hoffman (above, right) — the head of cloud computing at Ericsson (and former founder and CTO of Joyent), which just invested millions of…


AR HelpOuts Launched!

10-Sep-14 08:07 pm EDT
The AppRefactory Inc. launches its first service offering today with the debut of a partnership with Google Inc. through Google Helpouts.  This further enhances the company’s service offerings in the application maintenance and support space, and also extends its services to more generalized support of the tools and technologies it uses throughout its service delivery process.  Support is being offered through Google Helpouts for technologies and platforms like:

  • Microsoft Visual Studio (all editions, 2005-2013)
  • Programming Language Support / Tutorials:
    • Visual C#
    • Visual Basic / VB.NET
    • Java
    • JavaScript
    • HTML
    • XML
    • SQL
    • VBScript
  • Microsoft SQL Server
  • Microsoft Team Foundation Server
  • Microsoft Windows / Microsoft Windows Server
  • Microsoft Office / MS Office VBA
  • Linux (Ubuntu)
  • Apache WebServer
  • Microsoft Internet Information Server
  • Microsoft Windows Communication Foundation (WCF)
  • Microsoft Windows Workflow (WF)
  • Microsoft .NET Framework
  • Web Services

…and much, much more!

Google Helpouts also offers payment features that allow either business or individual users to consume services on demand, easily.  And with this launch, the service is being offered, for a limited time, with a free support instance, giving potential customers an opportunity to “try and buy” in a fixed 20-minute session without charges or fees applied.  (See the Google Helpouts terms & conditions for more info.)

AppRefactory Inc. Website v1.0 Complete!

03-Sep-14 04:30 am EDT

Websites don’t ordinarily get version numbers, but in the case of The AppRefactory Inc. website, there may well be an exception.  Although the website was technically delivered on August 21st, some last-minute technical details (including a DNS issue that needed resolving) delayed the declaration of “mission accomplished” until today.  However, we can now safely, and unequivocally, state: The AppRefactory Inc. website has been officially launched.

Thunderous applause, please!

Just to quote the official announcement:

The AppRefactory Inc. has launched its website, bringing with it information about a number of its service offerings and other basic information about the company.  Beyond acting as a tool for making the general public aware of its services, the weeks and months ahead also promise the excitement of new product launches, plus the site’s integration into other projects (already in development) as a platform for a host of Internet-based services serving an ever-larger, steady stream of new users of every type.

Please review the content and watch for what’s coming soon or learn more about what we offer today.  And check back soon – because even more is on the way!

Next, my attention turns to uploading the final release of AR CamFeeder, which has been sitting on the back burner for the past few weeks while I was distracted by another project.  But it won’t be long before I follow up on that, and on the next project behind it, already all queued up.  Like the announcement says: stay tuned!

Microsoft Buys Nokia

03-Sep-13 12:24 pm EDT

Just last week, following a discussion with a potential business partner, I found myself doing something I’ve done a few times over the course of my career: wondering whether I was making the right choice sticking with being “a Microsoft technology expert”.  Typically, such ennui occurs during downtimes for the software giant, and there have definitely been downs along with the ups in the 30-year-long Microsoft saga.  But with yesterday’s announcement of the Nokia buyout, I think I’ve learned to recognize such feelings as moments that really herald the coming of a big announcement or some influential development; once more, my momentary doubts about sticking with Microsoft were immediately laid to rest.

Nokia, for its part, hasn’t been doing well in the smartphone market (not even as well as Microsoft’s own Windows Phone operating system) in an industry dominated by Google’s Android and Apple’s iOS.  During the reign of its now-outgoing CEO, Stephen Elop, Nokia shares dropped an extremely disappointing 85%, giving pause to any notions one might have of him as a replacement for Steve Ballmer (who is also in the midst of his own departure from Microsoft).  Nokia was already licensing Windows Phone from Microsoft, so some have said not much else is likely to change at the Finnish cellphone giant.

In the end, Elop (a Canadian) may have been engineering optics, in league with Ballmer, to position himself to succeed the latter at Microsoft.  But along with those optics comes a renewed momentum for the Windows Phone OS, which can only be a good thing for those of us who believe in the Microsoft brand.



Dr. Dobb’s: Software Development Trending to be More Complex, Not Less

28-Apr-13 01:14 pm EDT
There aren’t many advantages to being on disability for the past several months, but as I’ve recovered, looking for work and taking on the challenge of getting my own software projects closer to completion has caused me to reflect on how software development has changed over the course of my career.  Imagine my shock at finding out this weekend that I wasn’t alone in this realization, when I ran into a Dr. Dobb’s article that articulated, more clearly than I ever could (available free time notwithstanding), exactly what this revolution in app development is all about.

Chart above: “Fraction of programmers (y-axis) who spend x amount of time coding in a given language in 2012.  Note the big spike on the left and the mostly sub-2% numbers for programmers coding more than 50% of the time in one language.” (Source: Dr. Dobb’s Journal, 03-Apr-2013)

My lead project is actually an upgraded version of a strategy game that has been in the public domain for quite a while, but which has the simplicity needed to support interfaces on a number of different platforms, and with them, the necessity of leveraging a number of different technologies to make building and maintenance practical.  What will this mean for software development as we close on 2015 or even 2020?  Likely what has happened before: amalgamation to facilitate the creation of single-vendor solutions, so that the process is re-simplified.

But until that happens, coders like me are gonna be left to absorb multiple platforms and become jacks-of-all-trades (and hopefully not lose the mastery of some in the process).

C# or VB.NET?

14-Jul-11 08:24 pm EDT
No matter how much time passes, it seems, the question is always being asked on one project or another: is Java better than Visual Basic?  Is C# better than VB.NET?

LinkedIn has been playing host to a lengthy, but at times interesting, discussion on this question, which seems to have an obvious, short answer.  Yet the discussion holds useful lessons for less experienced programmers that should be taken to heart…

Some highlight replies I selected from the whole thread:

Read more…

Anti-Microsoft Bigotry Finds New Ammunition in Search Results Scandal

02-Feb-11 10:03 pm EST
At left, Google searched for the correct spelling of "tarsorrhaphy" even though "torsoraphy" was entered. Bing manages to list the same Wikipedia entry at the top of its results. (Source: FoxNews.com; associated article here.)
Google and other players in the information technology (IT) industry say Microsoft is guilty of “industrial espionage”, after catching the software giant displaying results originating from Google on the results page of Bing, the search engine Microsoft operates.  The charge itself is surprising; but perhaps almost as surprising is that a company with the name-brand recognition, market share and raw success of Google would float a charge as ridiculous as “espionage” is in this case, in public.

It’s all a product of an ongoing and, really, tired theme in the IT sector: techno-bigotry.  It’s existed for years between the two mainstream, competing platforms for Internet-based application delivery: on one side you have Microsoft Corporation which used to be criticized (rightfully) for offering a heavily proprietary solution architecture; and on the other, what I term “the Java alliance” – which is really an architecture that at key points conforms with a loose agreement on industry standards and technologies that are based upon “open-source” development principles (though there are many elements which can be proprietary in nature).

There are those who’d dismiss the Google announcement concerning the alleged Bing results replication as merely a product of the fiercely competitive web-search sub-industry: that it’s all about optics and trying to make Google appear more innovative than Microsoft (yet again).  But this is a hugely simplistic view of Google’s real motives.  After all, the information being contested in this complaint is either “out there”, visible to the public (or at least any member of the public equipped with an application capable of reading the web protocol HTTP, i.e. a web browser), or voluntarily shared with Microsoft by individual users (i.e. data shared through the Bing toolbar or other available “clickstream” data, acquired by legitimate means).  Normally when one conducts espionage, one is surreptitiously (and unlawfully) obtaining information which has value both as intellectual property and as information that offers competitive advantage (which, in the IT sector, would typically be technology that nobody else has).  Typically, such technology is the product of innovation by the company holding it.  So did Microsoft, which admits it did present results in a fashion very similar to Google’s, commit espionage or, as one analyst claimed, “cheat” doing what it did?  The answer is yes, certainly, if your definition of espionage and cheating includes using information that was broadcast, without encryption or other protections of any kind, into the public domain.

Technology bigotry is so ingrained in the IT industry’s culture; there are very real parallels with college sports, complete with slogans, mascots and meaningless, ad hominem arguments as to which team is better.

My definition of both espionage and cheating differs from that conclusion (as does virtually every published lexical reference I could find online).

Beyond all of this, were Microsoft really guilty of espionage, Google would not be making claims so publicly about its “sting”, as they call it.  Microsoft would be dragged up on criminal charges and Google would be very tight-lipped about what claims it was making in public, notwithstanding the usual statement in such circumstances: “We cannot comment because the matter is before the courts.”  (Particularly in the litigation-prone United States of America.)  So why is Google trying its would-be espionage case in the court of public opinion?  In fact, there are many reasons.  For one thing, Google wants to highlight its position as the leader in search technology, because Bing (Microsoft’s search product) has been gaining ground.  And, let’s face it, search is Google’s “crown jewels”, just as the Office products (alongside the Windows operating system) are Microsoft’s.  Google will do anything and everything (within the scope of lawful conduct) to defend its web search property.  In charging Microsoft with “cheating” like this, particularly before the largely non-technical advertising and marketing business audience, Google is attempting to make Microsoft out to be a company that just can’t figure out how to beat Google by innovating on its own.  The trouble is, everyone already recognizes Google as the undisputed leader of web search.  So is there something else Google gains in all this?  You bet!  There’s another audience of note: software developers (like me!).

Web developers and software developers are often overlooked by the mainstream media as a relevant crowd in such stories; but don’t think for a second that Google and Microsoft don’t spend a lot of time, effort and cold, hard cash wooing developers to use their products.  Why?  Because when software-based solutions are created, the size of the pool of resources available to maintain and upgrade the resulting products is a key consideration for IT managers, which translates into how much those solutions end up costing in the end.  In general, the more developers there are whose expertise gravitates to one particular toolset, the less costly that toolset is.  And at the moment, Microsoft is winning the battle for the hearts and minds of software developers (mostly due to the de facto capitulation of Java through Oracle’s acquisition of it, via the Sun Microsystems transaction, back in 2009).  In this developer’s opinion, Java has lost much of its momentum throughout the industry as a direct result of Oracle taking control of the technology.  And software professionals are aligning their careers accordingly.  But Java’s legacy can’t be underestimated: it is still found in many spaces, and the Java language will remain a relevant, sought-after skill for several years into the future at least.  Google can be thanked for this, in part.  As a third-party company, Google is at liberty to offer integration to any partners it prefers… and it is obvious that while it is possible to integrate with many Google service offerings using Microsoft technology, Google is not rolling out the red carpet to Microsoft’s .NET platform, nor to the Windows operating system, by any means.  Indeed, there are service offerings which are available exclusively on the Linux operating system, one of the top three competitors to Microsoft Windows.

From a business perspective, this lukewarm reception to Microsoft integration makes some sense, since increasingly Google and Microsoft contest the same service paradigms.  Search is only one example.  Google Docs is a direct competitor to Microsoft Office, Google Desktop is a direct assault on both Microsoft Live Essentials and Microsoft Search technologies.  If Google is to gain mind-share amongst the developer population and someday be able to threaten Microsoft’s dominance in the server room (which is its ultimate goal, I believe, since that’s where the big money is), it really needs to do what it can to discourage adoption of the .NET Framework.

So expect more spectacles of one sort or another with this core theme, exhibited as part of a long-term strategy to beat Microsoft.  And I say long-term in the full sense of the word.  Not only is Google not yet directly challenging Microsoft in the operating-systems space (which it needs to do in order to get through the server-room doorway), but Microsoft has played this game before… and always won.  It beat Java with .NET.  It beat Netscape with IE.  It even beat Sony and its PlayStation with the Xbox.  But Microsoft has never taken on a company quite like Google before: a company as innovative and fast-paced as Google.  Google won an early battle stifling Microsoft’s foray into online services with its Microsoft Live web properties; but Microsoft countered by making a huge investment in Facebook, and continues to increase that investment while partnering ever more closely with the near-monopoly Facebook holds on social networking.  The game is too close to call at this point.

And expect the techno-bigotry to continue… with all its parallels to college sports: slogans, cheers, mascots and meaningless ad hominem arguments as to which team is better.

AddThis Chrome Extension Displays Empty Service List: Solution Found!

25-Jan-11 08:12 pm EST
I make it a point to try to share solutions I find to any computer issues that are particularly disruptive, or those which prompt me to post to support forums seeking assistance.  This is partly to ensure others who experience the same trouble can at the very least find the solution somewhere (particularly when a problem seems to have no solutions posted; I make it a point to do research before asking questions.  RTFM, right?), and partly to expose my approach to public scrutiny in case there’s a more efficient method I’ve overlooked.  And, by all means, please add your comment(s) to this blog if you’ve got something to contribute.  More comments on a given topic increase the likelihood of matching searches on that problem topic.


There should be a list of services which would allow the user to select their favourite sharing mechanism for any web page displayed, as in the view above.

Synopsis

And so, what problem got solved?  A rather mysterious behaviour was being reported by several users of the latest update of Google Chrome: the extension installs successfully, and the orange “plus-sign” that serves as the AddThis icon appears in the Chrome toolbar.  You click it, and the bubble containing the list of services you could link the currently displayed web page to appears… containing no services whatsoever.  Puzzled, you then right-click the toolbar AddThis button, which yields the typical pop-up with an “Options” menu item, only to find a similarly empty service-list customization screen.  Where did all the services go?

After theorizing that it was a problem with Microsoft Windows 7 or Vista (since the XP machines I’d tested it on seemed to have no trouble loading it properly), and then discovering this wasn’t the issue, I finally tracked down where Google Chrome stores all the extension logic on client workstations and started examining the JavaScript.  Eventually I realized that one of the key JavaScript files, one not stored locally on the client workstation, wasn’t being loaded, causing the locally-stored JavaScript to fail at exactly the point where the service lists are displayed.
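One quick way to confirm that a remote resource is the missing piece is simply to try fetching it yourself.  A minimal Python sketch of that kind of check (my own illustration of the general approach, not part of the original troubleshooting session):

```python
import urllib.request

def can_fetch(url, timeout=5):
    """Return True if the URL answers with HTTP 200, False on any
    DNS, network or HTTP failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers DNS failures, refused connections and HTTP errors alike.
        return False
```

If a check like this returns False for the extension’s remote script while other URLs work, the problem clearly lies in name resolution or blocking rather than in the extension itself.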

Was it a networking problem?  Sort of.  In Windows it’s possible to store a text file in a key system subfolder to override the network (IP) address of a given URL or web address (or, to be more technically accurate, to override the IP address of any specific DNS entry).  It’s called a “HOSTS” file, and it’s typically stored in the folder “C:\Windows\system32\drivers\etc\”.  By default, this file contains only some commented-out information when Windows is first installed.  It’s expected that if you choose to edit the HOSTS file, you are aware that entries made therein will override network addresses for websites as far as your machine is concerned.  (Obviously, changing a file on your own system isn’t going to alter network addresses for everyone else on the Internet.)

One popular use of the HOSTS file is to take phishing sites, adult-content sites, or other spamming websites and assign their URLs to a special IP address which refers to the machine the HOSTS file itself is on: 127.0.0.1.  (This address is referred to as “localhost” or “loopback”.)  Why would anyone want to do that?  If a hostile website (one you were forwarded to by accidentally opening a link in a legitimate-looking email, or by a virus) wants to send sensitive info from your machine to a known hostile URL, adding that address to the HOSTS file and redirecting it back to your own machine nicely prevents the harmful or undesirable network access from occurring.
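As a sketch, entries of this sort look like the following (the domain names here are hypothetical placeholders, not entries from any real blocklist):

```
# Map unwanted hosts to the local machine (loopback),
# so requests to them never leave this computer
127.0.0.1  tracker.example-ads.com
127.0.0.1  login.example-phishing.net
```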

Beyond this, there are a lot of advertisements online which can slow down performance, and you might not want to deal with pop-ups and display ads all the time.  (I’m in that group!)  So I periodically obtain updates to my HOSTS file from a Microsoft MVPs website at http://www.mvps.org/winhelp2000.  There are a number of helpful tools offered at this site, but I am interested in the HOSTS file because it eliminates a lot of the annoying and threatening content online.  Unfortunately, despite its utility, there are some URLs which are actually useful but which, for varying reasons, the HOSTS file author(s) have determined are a threat or otherwise undesirable.  Among these addresses were:

# 127.0.0.1  s3.addthis.com
# 127.0.0.1  s7.addthis.com
# 127.0.0.1  s9.addthis.com

Conclusion

So this article discussed the cause and solution for one scenario that can produce the absence of services in all elements of the AddThis interface in Google Chrome (all versions which support extensions so far).  It was discovered that specific network settings (located in a text file with a default Windows 7 pathname of “C:\Windows\system32\drivers\etc\HOSTS”) blocked a network address upon which the AddThis extension is entirely dependent, causing the aforementioned absent-services behaviour.  While the HOSTS file was the cause of this particular issue, it stands to reason that any network management tools or software (e.g. anti-virus/anti-spam software, the Windows firewall, or other firewall management hardware/software) could potentially cause the same behaviour.  If you are experiencing the behaviour described above, your troubleshooting efforts should include checking your network settings – especially those which could block IP network addresses.

As always; comments and questions are welcome.

Code Shock

17-Aug-10 12:22 am EDT Leave a comment

I call it “code shock”.  Every so often I stumble into some code that, at least at the outset, defies all attempts at understanding.  And, I am perfectly willing to admit — I don’t know everything.  (Note to recruiters without a strong technical background who take the time to read my blog entries: I’d argue that being able to say that is a good thing!)  Yet despite the suspicion that I’m missing some brilliant approach I could leverage myself at a later date, the following method made me do a double-take recently:

    8     public class Behaviours
    9     {
   10         private string name;
   11         private Type type;
   12 
   13         public void GetResult(string name, Type type)
   14         {
   15             if (name == null) { throw new ArgumentException("A name must be provided."); } 
   16             if (type == null) { throw new ArgumentException("A type must be provided."); }
   17             name = name;
   18             type = type;
   19         }
   20 
   21         public string Name
   22         {
   23             get { return name; }
   24             set { name = value; }
   25         }
   26 
   27         public Type Type
   28         {
   29             get { return type; }
   30             set { type = value; }
   31         }
   32     }

The problem, of course, lies on lines 17 and 18 in the above listing.  The method these lines belong to accepted parameters called “name” and “type” on line 13; yet the developer who implemented this saw no issue assigning those values to themselves by reusing the same names.  Since “name” and “type” already equal themselves, respectively, the logic is redundant and, indeed, the C# compiler raises warning CS1717 (“Assignment made to same variable; did you mean to assign something else?”).

There are a number of other issues with this class too.  Indeed, it might be an example worthy of presentation to a class of newbie C# students, or perhaps interview material for a junior programmer.  In any case, it’s certainly not worthy of production code and, hopefully, is merely the result of someone using find/replace or other code automation within Visual Studio and missing the fact that there are two private members in the class with the same names as the parameters – members which never get assigned, because the parameters simply overwrite themselves when used this way.
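For reference, a minimal corrected version of the method (kept inside the same class, so the private fields are in scope) might look like the following; qualifying the fields with this, or simply renaming the parameters, is all that’s required to make the assignments meaningful:

```csharp
public void GetResult(string name, Type type)
{
    if (name == null) { throw new ArgumentException("A name must be provided."); }
    if (type == null) { throw new ArgumentException("A type must be provided."); }
    this.name = name;   // "this." disambiguates the field from the parameter
    this.type = type;   // ...and warning CS1717 disappears
}
```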

Another fun pattern uncovered on the same occasion as this gem was a more familiar anomaly: the anonymous re-thrown exception.  It looks something like this in C#:

   34     public static class SampleExceptions
   35     {
   36         public static void MyException(Exception sampleEx)
   37         {
   38             try
   39             {
   40                 int x = 10;
   41                 int y = 0;
   42                 int z = x / y;
   43 
   44             }
   45             catch
   46             {
   47                 throw;
   48             }
   49         }
   50     }

 

As you might have guessed from lines 45 through 48, the static method MyException(Exception) is going to throw a division-by-zero exception; but since it’s technically handled by a catch (error handling) block, an unspecified exception percolates “upwards”, back to the caller.  Some developers make the mistaken assumption that error reporting is unaffected by this approach, but in fact checking in such code makes troubleshooting all the more difficult if anything ever goes wrong here.  Consider the stack traces returned by this block and by a properly-structured try-catch block:

Unhandled Exception: System.DivideByZeroException: Attempted to divide by zero.
   at RossReport.Samples.BadEquivalency.SampleExceptions.MyException(Exception sampleEx) in C:\Users\holderr\My Code\BadEquivalencySample\BadEquivalencySample\Behaviours.cs:line 49
   at RossReport.Samples.BadEquivalency.Program.Main(String[] args) in C:\Users\holderr\My Code\BadEquivalencySample\BadEquivalencySample\Program.cs:line 18

The exception cited above was unhandled, which is bad enough.  But you’ll also note that it claims the failure point is line 49, on which the only logic visible to us is a lone closing curly brace (“}”).  In other words, our stack trace is useless: on re-thrown exceptions of this sort, the .NET Framework reports the location of the rethrow rather than the line where the error actually occurred, and one is left to guess where the monkey wrench landed.

A properly-written catch block with at least a Debug or Trace statement serving to output the exception message and/or stack trace to a file or even the console window is much more useful:

   46             catch (Exception ex)
   47             {
   48                 Trace.WriteLine(String.Format("***** Exception occurred: {0}", ex.Message));
   49                 Trace.WriteLine(String.Format("STACK TRACE:\r{0}", ex.StackTrace));
   50             }

…gives us:

***** Exception occurred: Attempted to divide by zero.
   at RossReport.Samples.BadEquivalency.SampleExceptions.MyException(Exception sampleEx) in C:\Users\holderr\My Code\BadEquivalencySample\BadEquivalencySample\Behaviours.cs:line 43

NOTE: The above assumes an appropriately-configured trace listener in the .config file of the project hosting your code.

An even better and more popular solution for exception handling in all sorts of applications can be found in the Logging Application Block of the Microsoft Patterns & Practices Enterprise Library (a.k.a. EntLib), which is available on CodePlex.
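As a sketch of what that looks like (assuming the Logging Application Block is configured in your app.config, with a category named “General” – the category name here is my own placeholder), the example method above could hand the exception off to EntLib like so:

```csharp
using System;
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class SampleExceptions
{
    public static void MyException()
    {
        try
        {
            int x = 10;
            int y = 0;
            int z = x / y;
        }
        catch (Exception ex)
        {
            // Build a structured log entry and hand it to whatever
            // listeners the Logging Application Block is configured with.
            LogEntry entry = new LogEntry();
            entry.Message = ex.Message;
            entry.Severity = TraceEventType.Error;
            entry.Categories.Add("General");  // placeholder category name
            Logger.Write(entry);
            throw;  // rethrow so callers still see the failure
        }
    }
}
```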

(Strangely enough, the application I mentioned as the inspiration for this article uses the EntLib extensively….)

And apologies if anyone finds a sanctimonious subtext to my commentary here; the purpose of the article isn’t to malign anyone’s coding skills, God knows.  (Such was not my intent even when citing a couple of unnamed developers who found themselves stymied when entering a dot on a blank line in the VB editor didn’t summon IntelliSense – an activity that normally requires one be within a “With” clause.)  Instead, it’s a sincere effort to help fellow developers avoid causing their peers unnecessary cases of “code shock”, especially when delivery deadlines loom.

Happy coding!

P.S. You can download the sample project used to create the source code for this article here.

Agile SCRUM A Good Starting Point

06-Jul-10 03:45 pm EDT Leave a comment

 

I subscribe to a number of informative online newsletters on varying IT topics and this morning found a great white paper on introducing software development methodologies to a software development team.  In it, there’s a comparison with other methodologies so the experienced developer can relate the concepts to those they’re considering adopting:

I am familiar with Agile SCRUM, or at least a couple of attempted implementations of it.  I say “attempted” because what one inevitably finds is that practices get shaped and moulded to suit the particular team or teams using them.  But everyone needs a starting point, and this paper offers a needed perspective on that tailoring process from the standpoint of adopting Agile SCRUM and going from there.

.NET Framework v4.0 (RC) Blends WF/WCF Further

25-Mar-10 01:28 am EDT Leave a comment

I haven’t played around with it much (God knows I haven’t had the time lately), but this article presents a decent summary of some of the features coming down the pipe for WCF and WF developers in the upcoming release of Microsoft’s .NET Framework v4.0…

 

Automatic State Transition Using the State Machine Workflow 3.5

23-Mar-10 12:35 am EDT 3 comments
Figure 1: A small Microsoft Workflow Foundation (WF) 3.5 State Machine diagram with 2 states; the eventDrivenActivity1 contains a SetStateActivity to a second state, State_Y.
When recently asked to create a State Machine Workflow (using Microsoft’s .NET Framework v3.5) to reflect an existing business process with operations in some states that wouldn’t map to the event model Microsoft supports, I posted a couple of design questions to the MSDN Forums but didn’t get an answer (which was unusual).  What designs wouldn’t fit?  Well, one big problem was that to go from one state to another, the designer seemed built on the idea that an external event, typically a user-driven event of some sort, was responsible for executing code after a state change was made.  But in my business process, about half the state transitions were entirely systemic in nature; in some cases, state transition loops could even occur that needn’t involve any external actors at all, neither users nor system actors.  Another big problem was that, within a given event (the main container for activities in a State Machine workflow), the exposed WCF service method could, of course, be associated easily with a ReceiveActivity.  But as soon as a SetStateActivity was processed, regardless of its placement in the event container, the Response object would be returned to the caller, leaving any other logic in, say, the StateInitializationActivity of the state being transitioned to, processed asynchronously.  This made feedback to the calling application awkward.

Okay, I thought, there must be some way to do this.  Perhaps we need to simply have one service in the State Machine call another.  Of course, that approach failed miserably.  In the WF 3.5 platform, the State Machine workflow is itself an instance which, once instantiated, couldn’t be accessed by making another call through WCF back to itself.  Thus, using the SendActivity to handle transitions between states wouldn’t work within a State Machine itself (though it worked beautifully when engaging other workflows, which is what it was designed for).  Then I thought of building custom activities derived from IEventActivity which could respond to calls from within the State Machine, but I couldn’t get this working, as the WorkflowQueuingService wouldn’t cooperate with my attempts to integrate the needed design.  Then it occurred to me that the easiest solution might involve leveraging the SqlWorkflowPersistenceService.

The SqlWorkflowPersistenceService is designed to facilitate tracking state over long periods of time – or at least longer than you normally want your workflow to reside in memory, vulnerable to service outages or, more likely, gradually gobbling up system resources alongside its other instances over the course of its normal period of execution.  Using it, the State Machine workflow could have a brokering class accept a method call from a client, perform a given state transition using the SetStateActivity, dehydrate by means of one workflow service method, and then be immediately rehydrated by another workflow service method which would continue the workflow as desired, all the while holding on to the Request object until some condition was set to release it.
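As a rough sketch of the configuration involved (the runtime name, database name and connection string below are placeholders of my own), registering the SqlWorkflowPersistenceService with the workflow runtime in the host’s .config file looks something like this:

```xml
<configuration>
  <configSections>
    <section name="WorkflowRuntime"
             type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
  <WorkflowRuntime Name="PersistedRuntime">
    <Services>
      <!-- UnloadOnIdle dehydrates idle instances to the persistence database -->
      <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, System.Workflow.Runtime, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           ConnectionString="Data Source=.;Initial Catalog=WorkflowPersistenceStore;Integrated Security=True"
           UnloadOnIdle="true" />
    </Services>
  </WorkflowRuntime>
</configuration>
```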

Figure 2: A UML sequence diagram indicating how an object leveraging the WCF service proxy can broker calls to the workflow services.  Thus, when invoking the SqlWorkflowPersistenceService, state transition and logic within the second state may be called immediately without waiting for user input or another actor to trigger an event.

Naturally, it is important that within the eventDrivenActivity1 container illustrated in Figure 1 (above) there be at least one activity decorated with the PersistOnCloseAttribute, and that the SqlWorkflowPersistenceService be appropriately configured with the state machine workflow.  Also, some mechanism must be created within the object invoking the WCF service proxy which is likewise accessible to the State Machine workflow itself, so the caller knows when the response may be returned to the client.  One option could be a static member of the workflow runtime hosting class (which can optionally override the default WorkflowServiceHostFactory class typically called in the .svc file accompanying an HTTP-based workflow service).  Another obvious option is a value stored in a database, such as one responsible for aggregating and/or displaying messages from the workflow service back to a client application.
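A minimal sketch of such a decorated activity might look like the following (the class name is my own placeholder; the attribute tells the runtime to persist the workflow when the activity closes):

```csharp
using System.Workflow.ComponentModel;

// Persist (and potentially unload) the workflow instance once this
// activity completes, so another service call can rehydrate it later.
[PersistOnClose]
public class DehydrationPointActivity : Activity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // No work to do here; the attribute alone triggers persistence on close.
        return ActivityExecutionStatus.Closed;
    }
}
```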

To some, the solution described in this article may seem a logical and obvious solution to the problems I outlined at the beginning, but when confronted with a new API many developers (myself among them) are unsure of what design models work in an otherwise new and unknown paradigm.  The examples I reviewed prior to writing this article were those that fit a narrow band of user-driven approaches so I thought it important to review an alternative publicly.  After all, not everyone builds their business processes according to the same philosophy and in the model I was presented with, making WF a workable solution with some guidance on how to call logic in other states would have reduced an already heavy learning curve.

Please feel free to add questions or comments or contact me by following the instructions on my blog’s main page.

A Quick ASP.NET Web Parts Tutorial Using SQL Server 2008

25-Jul-08 11:26 pm EDT Leave a comment
This treatise on developing a web portal is authored by a well-known Microsoft MVP named Omar Al Zabir (blog), CTO of PageFlakes, a portal software company.  DropThings is the OSS version of their title product, and is intended as a teaching tool, although there are clearly opportunities to extend it into many other applications.

After doing a fair bit of work with Microsoft SharePoint 3.0 / 2007 of late, I decided I wanted to expand my personal website beyond the capabilities offered at Live Spaces.  Specifically, I wanted my personal website to offer some of the same capabilities that are available with SharePoint, but couldn’t use the SharePoint platform very easily at home since, of course, neither WSS nor SharePoint works on non-Server Windows platforms.  (Plus, even though I work for Microsoft, there may be issues with my setting up a developer-license instance of the software to run a quasi-personal application online – issues I’d rather not have to worry about.)  So I decided to locate another “light-weight” portal alternative to host applications and information the way I wanted.  Much to my shock, there is a fairly mature open-source project, with an accompanying recently-published book, that instructs you on how to go about building your own web portal – from scratch!  (Well, if you really want to…which I don’t.)  The book is called “Building a Web 2.0 Portal with ASP.NET 3.5“, which is very apropos.

Because SharePoint is out-of-scope for this little project, I’d have to refresh my understanding of ASP.NET Web Parts (as I hadn’t used them in a while).  Web Parts were integrated into the .NET Framework 2.0, so they’ve been around for a couple of years.  As the name suggests, they offer a means for web developers to isolate functionality into a smaller region of a single web page so that other functionality may be displayed alongside – exactly as would occur in a web portal.  However, there are many options one can configure, which gives Web Parts a lot of power and potential but also generates a bit of a learning curve.  And this is another case where the MSDN doesn’t get you there especially fast.  This is, in part, my motive for writing this article; although it’s also my intent to get you, the reader, to understand the context of each step I’m walking you through – while staying focused on the goal of getting a couple of web parts going in as few steps as possible.

Step 1: Create & Configure the Personalization Database

The first hurdle one needs to clear is dealing with the fact we’re not using SQL Server Express.  It seems Microsoft is trying to get SQL Express running everywhere – even on desktops fully equipped with the much more sophisticated and powerful Microsoft SQL Server 2008 (or 2005, if you’ve yet to upgrade – these instructions should work for the elder database too).  Web Parts are almost entirely intended to work with a .NET Personalization Provider, which in turn is associated with .NET Membership and Internet Information Services (IIS) site providers.  Indeed, you can get a peek at these various IIS website options in the new IIS 7.0 management console on Windows Vista or Server 2008 by clicking on a web site and selecting “Providers” (ASP.NET area).  The figures below demonstrate how to access the different providers from within the IIS management console.  In fact, the one we’re looking for likely won’t be there – we have to create it, along with the database where it will store its data.  And, while it’s possible to build a new provider programmatically using the instructions given us by the good folks at MSDN, Microsoft has generously provided a tool called “aspnet_regsql.exe” with the .NET Framework 2.0 tools which greatly abbreviates this process:

Each IIS 7.0 website can be configured using a control panel-like window that serves as the configuration tool for Microsoft’s latest web server.  (Click image to enlarge.)

Providers come in several flavours, including the 3 identified in the drop-down menu depicted above.  But there’s still more; and although the provider needed to leverage SQL Server 2008/5 is of the same type as the Users provider highlighted here, it must be registered through insertion into the <webParts> block in web.config.  (Click image to enlarge; read below for more details.)

After running aspnet_regsql.exe, a configuration wizard appears – displaying the very screen depicted in this image (click to enlarge).  The wizard will prompt you for details about the name of the database server and instance, along with security options.  It concludes by giving the connection string to add to web.config along with some final instructions.
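For the record, the same result can be had from the command line without the wizard.  A typical invocation (run from a Visual Studio or .NET SDK command prompt) might look like the following; to the best of my recollection the -A c switch adds the Web Parts personalization feature, while the server (-S), Windows authentication (-E) and database (-d) arguments shown here assume a local instance – adjust them for your own environment:

```
aspnet_regsql.exe -S . -E -A c -d WebPartsTrial_Membership
```

Running aspnet_regsql.exe with no arguments launches the graphical wizard described above instead.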

Step 2: Create & Configure Host Web Application

After completing the wizard, we’ll need to get a page with some web parts ready.  This means we’ll need to create a good, ol’ fashioned ASP.NET web application.  (This can be most easily achieved using Visual Studio 2008 or Visual Studio 2008 Express Edition.¹)  We can start with Default.aspx, which we’ll use to play host to a WebPartManager control.  Below is a sample of the control’s configuration in the HTML source for the page:

    1     <asp:WebPartManager ID="zoneManager" runat="server">
    2         <Personalization ProviderName="SqlMembershipProvider" />
    3     </asp:WebPartManager>

Listing 2.1: WebPartManager Control HTML Source (Default.aspx)

Of course, the ID property can be modified to suit your own naming preference, as can the ProviderName of the Personalization tag the WebPartManager encapsulates.  However, it is important to keep track of the name of the Personalization Provider, as you’ll need to add it to the web application’s web.config file.

Figure 2.1: Add Web Site Dialogue (click to enlarge)

Figure 2.2: Add Web Site Connect As dialogue (click to enlarge), used to configure security – required for Web Parts to operate correctly.

But before editing web.config, we should first take a moment to make sure IIS has assigned a web site and application pool to the folder this application has been created in.  Visual Studio no longer does this for you automatically by default.  So return to the IIS management console, as discussed in Step 1, and simply create a new IIS web application.  On both IIS 6.0 and 7.0, you’ll be prompted with a screen similar to that depicted at right (Figure 2.1; very similar in the case of IIS 7.0).  Of course, you’ll want to select a unique name and app pool for the new site, in addition to a unique port number.  You’ll also note the button highlighted in the image, labeled “Connect as…”, which produces a small dialogue (Figure 2.2) that allows one to configure authentication for the new site.

Consideration of security is important when configuring any application using web parts.  Although one can certainly have web parts without enabling personalization, they aren’t anywhere near as useful without it, for the simple reason that it isn’t otherwise possible to retain information about any of the web parts beyond a single user session.  However, in the interests of simplicity, I’ve configured this sample application to use Windows authentication (i.e. NTLM) so that the current Windows account will be acquired from the browser when the user hits Default.aspx and passed through both to the .NET runtime and the SQL Server database.  In a production environment, you’d have to consider using a service account, adding it via the dialogue in Figure 2.2, and then reading up a little further in the MSDN documentation to figure out how to use other authentication models, such as Basic or forms authentication.  All of this configuration metadata would be placed in the web.config and, ultimately, added to the data in the membership database.

The final security setting we need to concern ourselves with here involves choosing the authentication model(s) the application will support.  For veteran web developers reading this, it’s probably obvious that we need to eliminate the Anonymous Authentication option if we want our credentials passed through NTLM – but it’s something that can be forgotten if one has been removed from web development for too long.  The error messages (if any) that result from overlooking this setting may not immediately point to it as the trouble; if anonymous (guest) credentials are used, after all, the application may work to an extent, but the personalization features will not.

It is here we’ll also want to enable the Windows Authentication option and ASP.NET impersonation.  The ASP.NET impersonation may also be useful in the scenario described earlier involving the creation of an NTLM service account for your own custom web parts application; but, again, for this example we’ll be sticking with simply enabling these two, which means the user you’re logged in as when you hit Default.aspx with your browser supplies the credentials used to access the database and any other resources.

So, having set up our new IIS web application and having created the application project in Visual Studio, it’s time to move on to the web.config itself.  Below is an excerpt of the pertinent blocks from the web.config of my sample web parts application:

   48     <authentication mode="Windows" />
   49
   50     <webParts>
   51       <personalization defaultProvider="SqlMembershipProvider">
   52         <providers>
   53           <add name="SqlMembershipProvider" type="System.Web.UI.WebControls.WebParts.SqlPersonalizationProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="SqlServices" applicationName="WebPartsTrial" />
   54         </providers>
   55       </personalization>
   56     </webParts>

Listing 2.2: WebPartsTrial web.config, WebParts Personalization Provider blocks

On line 48, you’ll notice we’ve gone with Windows authentication, making our application use the same security preference we recently set up in IIS.  Then we create the registration for our web parts’ Personalization Provider.  Unfortunately, I’ve selected the name SqlMembershipProvider for this web parts Personalization Provider instance.  This might seem a bit confusing if you read some of the MSDN documentation concerning ASP.NET providers, because there is a .NET class of that same name.  But that class has no relevance to our efforts here, so keep in mind that it’s merely the name of our provider and don’t get it confused with anything else.

Of course, it’s certainly possible to have other providers defined here, but they would be redundant (i.e. storing our data in two different places at once).  And it’s possible to store personalization data anywhere you want – instructions exist on the MSDN website detailing how to create a custom provider.  Thus, instead of using a SQL Server database you could use a MySQL server, an XML file, an MS Access database – or even an MS Excel spreadsheet, if you wanted to be really adventurous.  Of course, such endeavours are beyond the scope of this article.

There are two other attributes of note in the listing above.  The first is the applicationName attribute, which simply identifies the application token in the membership database.  This is needed because it is quite possible to have more than one application use the same membership data, which is what you might want if you wanted to maintain the existing credentials and preferences across several applications.  The typical approach is to have discrete membership databases (containing discrete settings) for each application, notwithstanding perhaps scenarios where a suite of applications shared by the same users forms a core tool set, or other special scenarios of that sort (more common in larger organizations).

The other attribute is our connectionStringName.  As the web.config of any data-driven application needs one or more connection strings telling ADO.NET how to connect to its data sources, our Personalization Provider simply needs the name of the string used to connect to our SQL Server database instance.  The listing below is the one used with my sample web parts application:

   25   <connectionStrings>
   26     <add name="SqlServices" connectionString="Data Source=.;Initial Catalog=WebPartsTrial_Membership;Integrated Security=True" />
   27   </connectionStrings>
   28   <system.web>

Listing 2.3: WebPartsTrial web.config, Personalization DB Connection String

Thus, the name of our connection string, “SqlServices“, specified on line 26, is the name provided for the connectionStringName setting on line 53 of the previous listing.  (Note: both listings are from the same web.config file.)  Of course, the connectionString attribute itself (also on line 26, above) is a fairly typical example of a SQL Server connection string for a database called “WebPartsTrial_Membership”, hosted on the same machine as the IIS web server the web.config is stored on, using integrated NTLM security and thus the account credentials of the user hitting Default.aspx or any other page constituting the web application associated with that web.config.

This concludes the basic configuration steps and coverage of rudimentary options for web parts applications.  Now, if we refer back to Listing 2.1, we can readily see what the appropriate value is for the ProviderName attribute, and how that name ultimately refers back to the membership database itself.

Step 3: Building Web Parts – The Easy Way

THIS SECTION IS STILL UNDER CONSTRUCTION – TO BE CONTINUED…

________________________

¹ There are several components to Visual Studio Express (VSE), including Visual Web Developer 2008 (Express Ed.).  Because each of the VSE tools is semi-independent, rather than integrated as in the commercial product, some of the basic features may operate differently.  This outline was written using Visual Studio 2008 Team Suite (VSTS), which includes an amalgam of the Visual Studio Development, Test, Architecture and Database editions.
