Author: mfoster

  • SWFObject flashvars, params, and attributes

    I have been working with SWFObject a bit lately and was a little confused by how the flashvars, params, and attributes arguments for the embedSWF() function differ from each other and what exactly SWFObject did with them. Based on the documentation, I did a little experiment where I used each and then viewed the generated source using the Web Developer Toolbar for Firefox.

    Here is my HTML and SWFObject JavaScript which uses the dynamic publishing method as described in the SWFObject Documentation:

    <script type="text/javascript" src="swfobject.js"></script>
    <script type="text/javascript">
    var flashvars = {
        my_flashvar1: "value1",
        my_flashvar2: "value2"
    };
    var params = { menu: "false" };
    var attributes = { id: "my_id", name: "my_name" };
    swfobject.embedSWF("Test.swf", "embedhere", "300", "250", "9.0.0",
        false, flashvars, params, attributes);
    </script>

    <div id="embedhere">Embed Fail</div>

    Here is what the generated source looks like after I load the page in Firefox and SWFObject has done its thing:

    <object type="application/x-shockwave-flash" data="Test.swf"
        width="300" height="250" id="my_id" name="my_name">
        <param name="menu" value="false" />
        <param name="flashvars" value="my_flashvar1=value1&amp;my_flashvar2=value2" />
    </object>

    As you can see, the attributes are just attributes on the object tag. The parameters show up inside param tags, with one param tag for each parameter, following the opening object tag. I would be curious to hear if anyone has used the “attributes” and “parameters” arguments when embedding Flash with SWFObject and for what.

    The flashvars argument is the most useful in my case because that is how you can get external data passed into the SWF. I think a lot of folks load their data into a Flash movie using a separate call to fetch an XML document. That is the way to go if you are pulling a large amount of data dynamically based on some type of user input.

    If you only have a few name-value pairs that are not going to change, passing them in during the embed using flashvars is probably the better option. Assigning these values during the embed will save the second round trip you would normally make to get an XML document.

    So let’s say you decide you want to pass a couple of key-value pairs into a Flash movie using SWFObject. In the code example above there are two key-value pairs we assign to flashvars: “my_flashvar1” and “my_flashvar2”. Now you will want to access these two inside your ActionScript code.

    For ActionScript 2 it would look something like this:

    trace(_level0.my_flashvar1);
    trace(_level0.my_flashvar2);
    

    ActionScript 3 requires another line of code to get to the same place:

    var paramList:Object = this.root.loaderInfo.parameters;
    trace(paramList["my_flashvar1"]);
    trace(paramList["my_flashvar2"]);
    

    A word of caution: SWFObject’s embedSWF() function takes quite a few optional parameters. Be sure that you put a “false” in for any optional parameter you don’t intend to use that comes before a parameter you ARE going to use. In the embed example below I want to use the flashvars option but I don’t want to specify an express install file, so I put a false in that spot. Also, just be careful in general because there are a total of 10 different arguments for the embedSWF() function (including all the optional ones), so it is easy to get them in the wrong order.

    swfobject.embedSWF("Test.swf", "embedhere", "300", "250", "9.0.0", false, flashvars);
    

    Take a look at the SWFObject documentation for a description of each parameter.

    That’s it!

  • Must-have web application development tools

    I have come up with a list of my “must-have” development tools:

    1. Dual Monitors – Developing with two monitors will make you much more productive simply because you spend less time switching between windows all day if nothing else. Monitors are pretty cheap and the productivity gains will more than pay for a dual output video card and second monitor. Even Microsoft says so.
    2. Firebug – If you do any kind of web development you should have Firebug on your tool belt. This Firefox Add-on will tell you exactly what CSS properties are being applied to an HTML element and from where, and then allow you to change those properties on the fly in the browser. It also offers JavaScript debugging, a DOM tree inspector, and, last but not least, a “Net” panel that allows you to see all your browser requests, responses, and timings.
    3. IE Developer Toolbar – IE’s answer to Firebug. It is not quite as full-featured as Firebug in my opinion, but it does at least allow you to inspect an element and determine how styles are being applied to it and where. This is quite useful since IE has a different box model than Firefox, and you can use all the help you can get when trying to make a site look the same in both browsers.
    4. Charles Web Debugging Proxy Application – Charles acts as an intervening proxy to your web browser that records all the requests and responses. Some of this functionality overlaps with Firebug but Charles goes a bit further by providing request breakpoints, request editing, throttling, and DNS spoofing.
    5. Firefox Web Developer Toolbar – This is another very useful Firefox Add-on that allows you to:
      • Display element attributes in-line with the page you are viewing.
      • View a page’s JavaScript generated HTML.
      • Resize the browser window to preset sizes.
      • Outline different types of elements in the page.
      • Quickly disable, enable, and delete cookies
      • The list goes on…
    6. JQuery – JQuery is a JavaScript library, but I also consider it an important tool for making JavaScript programming less painful. It allows you to easily select a DOM element you want to manipulate without typing a whole lot, and it handles many of the browser idiosyncrasies. Using JQuery’s selectors, you can easily change attributes and chain those changes together. JQuery also has shortcuts for event handling, effects, AJAX, rich UI components, and anything else that is repetitious, boring, or aggravating to do in plain JavaScript. It takes a little time to learn JQuery but it quickly pays off. While there are many other JavaScript libraries available, I would say JQuery has become the most popular, and this very scientific poll agrees ;). If JQuery or one of its hundreds of plug-ins doesn’t do what you need, JQuery will work very well alongside some of the other popular JavaScript libraries such as Dojo, Prototype.js, Ext.js, Mootools, and YUI.
    7. Putty – If you need to connect to your web host via Telnet or preferably SSH, Putty is tough to beat for the price.
    8. WinSCP – WinSCP provides SSH file transfer for Windows machines. If you use Windows on your desktop and a Linux host, this is one of the best ways to upload your files.
    9. Notepad++ – If you want a basic, lightweight text editor with color syntax highlighting to do your coding, Notepad++ is a great choice. Even if you use a full-featured IDE for your development, it is handy to have a good text editor around. Notepad++ has several plug-ins available as well.
    10. PHPUnit – If you are developing a web application that you think will be around for any length of time, then unit testing is a critical time saver. Chances are you already write throwaway tests to check whether your code is working before a user interface is built, so you are already committing time to writing tests. Unit tests developed using a unit test framework allow you to make “assertions” about the output of each function or small “unit” of code. Unit tests stick with your code throughout its life cycle and are usually executed before you check in a new change. When you run your unit tests using the framework’s executable you can generally test just a single class or your entire code base. This will quickly tell you if the code you just wrote is working correctly and if you have inadvertently broken any existing code elsewhere. At my current job my team and I code primarily in PHP and JavaScript, so we use the PHPUnit and JsUnit frameworks, but there are unit testing frameworks available for nearly all popular programming languages including C++, Java, C#, etc.
    11. Selenium – While unit testing covers individual units of code, integration tests cover how everything works together. Selenium accomplishes this by allowing you to build scripts that you can play back to emulate a user’s browser interacting with your application. Like unit tests, you can make assertions that elements in the web interface are working the way they should. This allows you to effectively perform an automated regression test of your application so you can make sure your code changes didn’t break any interactions between the components of the application. Although I personally really like Selenium, there are other good automated test tools such as Watir.
    12. Web Application Vulnerability Scanning Software? – This is an area that will strongly depend on your budget. I think vulnerability testing is essential, but short of doing a lot of manual probing and experimenting, it is going to cost you. Even if you have read the OWASP guide backwards and forwards and are careful to escape all your inputs, I still think it is important to run a test tool against your app before you release it into the wild. Although automated testing won’t reveal all your security issues, it will at least reveal some of the more embarrassing ones. If nothing else, it is important that the script kiddies don’t find anything interesting when they do the same. While there are many open source security tools, I am not aware of any that will do automated application security scanning like HP’s WebInspect, IBM’s Rational AppScan, or Acunetix WVS. I found an interesting comparison of the three here. I would be interested in any alternatives if you know of any.
    13. Security Web Sites – The following web sites are good security “tools”:
    14. WebLOAD? – Instead of hoping your web application will hold up to high traffic volumes, wouldn’t you like to really know for sure? A load testing tool is essential to predicting how your application will behave under load and will also help identify bottlenecks in the application that can be optimized. Optimizing your application based on load test data will help ensure you are focusing your optimization efforts on the real problem areas. In my current position we were using the allegedly open source version of WebLOAD, which worked pretty well. Unfortunately Radview is making the new, err… I mean “pro”, versions closed source, and I am guessing the licensing costs are ridiculous just from the fact that they don’t list the price on their website or provide a shopping cart, so you have to contact their sales folks (I am not a big fan of this practice). I am fine with convincing my boss to buy software if need be, so long as it doesn’t cost so much that he will laugh at me. So… I am interested to hear about any other load test tools you have experience with.
    15. Subversion – Version control is essential if you are working on a project for any length of time and especially with other people. Subversion is easy to setup and as long as you back up your Subversion server and make frequent commits, your code will be safe and you can always revert to a previous version if you really mess it up. CVS is also an option but I prefer Subversion because it maintains versions across the entire code base instead of just individual files. This allows you to easily revert a bad multi-file commit. There are plenty of clients available for Subversion and many development tools have subversion support built-in or provide plug-in options.

    Well that’s it for this post. I would really like to hear about any other must-have web app dev tools that you think should be on the list.

  • Review: Herbie the Mousebot Kit by Solarbotics

    img_7866

    I wanted a quick, easy-to-build robot kit to get back into electronics and robotics. I asked for a Herbie the Mousebot for Christmas and sure enough I got one. It was a fun kit to build and went together in a little over an hour.

    You start off with a PC board…
    Herbie the Mousebot by Solarbotics

    … and a handful of components…

    Herbie the Mousebot by Solarbotics

    You break apart the PC board, which serves as both the circuit board and the body for the mouse, which is pretty cool.

    Herbie the Mousebot by Solarbotics

    The PC boards join together via several solder joints. Tape is used to keep everything together until you are done soldering. A smaller board that holds the 9 volt battery connector helps keep the three main sides together. By the time you are done soldering all the joints it is a pretty sturdy little robot.

    Herbie the Mousebot by Solarbotics
    Herbie the Mousebot by Solarbotics
    img_7855

    The whiskers and tail activate a relay when bumped so the mouse will back up to avoid getting stuck. I taped the relay down while I soldered it in.

    img_7859

    Herbie with the photo diode “eyes” installed….

    img_7861

    Herbie just about finished…

    img_7862
    img_7866

    Herbie is an interesting robot because it uses a simple analog IC, the LM386, to do something you might think requires a much more complicated digital circuit or micro-controller/processor:

    http://downloads.solarbotics.net/PDF/Solarbotics_Herbie_the_Mousebot_Instruction.pdf

    img_7868
    Herbie the Mousebot by Solarbotics

    We don’t have much open bare floor in the house and it moves quite fast, so it was a bit of a challenge to keep it under control without hitting too many things:

    The Herbie Kit is well engineered and fun to assemble. I give it five stars and recommend it as a good first robot kit.

  • Review: Weller WLC100 Soldering Station

    5 out of 5 stars

    Weller WLC100 Soldering Station

    I have started to take a renewed interest in electronics again lately and wanted to get a good soldering station to work with. I have a couple fixed wattage irons I use for my RC plane wiring but I wanted something adjustable with a variety of alternative tips available.

    I ordered a Weller WLC100 and it is working pretty well for me so far. I also ordered some smaller conical and screwdriver tips that make it easier to solder smaller components and connections. One of the reasons I went with the Weller is that it is a relatively well-known brand and I know I will be able to find tips for it.

    There are more expensive soldering stations that have digital controls and displays, but I decided that an analog control was adequate. After using the station for a bit I am pretty happy with the analog control. The amount of heat transferred is so strongly dictated by the conduction of heat between the iron and the component/pad that I don’t know that such temperature precision makes much difference for most hobby uses. If you just tin the tip of your iron with a little bit of solder, it will significantly help with the transfer of heat from your iron to the component/pad you are soldering.

    Weller WLC100 Soldering Station

    The Weller iron is easy to grip with my fingers and doesn’t get too hot to handle at all. I built a Herbie the Mousebot kit with it using a .062″ screwdriver tip. It was nice to work with and did the job well. I would definitely pick up some smaller tips if you are going to be soldering smaller circuit boards. The screwdriver tip that comes with it is pretty nice, but a small tip affords more precision.

    I give the Weller WLC100 5 out of 5 stars. It is a good, relatively cheap soldering station with many tips available. Buy one and a couple tips to go with it:

    Conical Tip, .031

    Narrow SD Tip, .062

  • How to use the PHP cURL module to make HTTP requests from PHP

    Some of my previous posts talked about making HTTP/web requests from PHP with a focus on the PECL_HTTP request module:

    The last post mentioned above covered using the built-in file_get_contents() as an alternative if you are unable to install the HTTP PECL extension or need minimal HTTP functionality. This post will look at a third method of making HTTP requests from PHP: the PHP Client URL Library, AKA the cURL library. The PHP cURL library essentially wraps the same library behind the cURL command-line utility and makes its calls available as PHP functions.

    Installing the PHP cURL library on Ubuntu requires just a couple simple steps:

    • PHP needs to be compiled with the cURL library, but if you are using Ubuntu you can simply execute the following shell command instead of doing a custom PHP build:
      sudo apt-get install php5-curl
      
    • Restart Apache once the library has installed.
    • Call a page with the phpinfo() function and look for a “curl” section. It should be listed there if everything installed correctly:

      curlinfo1

    Once the cURL library is added you can call the curl functions, which are documented here. The following simple example makes a call to www.example.com. You will notice that I did not “echo” the return of curl_exec() to display it. This is because, by default, curl_exec() displays the result and returns true on success or false on failure.

    <?php
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://www.example.com");
    curl_exec($ch); // prints the response; returns true/false
    curl_close($ch);
    ?>

    If you want to assign the output to a variable so you can do something with it, you will need to set the CURLOPT_RETURNTRANSFER option:

    <?php
    $ch = curl_init("http://www.example.com");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch); // response is returned, not printed
    curl_close($ch);
    echo $output;
    ?>

    The PHP cURL library has a variety of options you can set using the curl_setopt() function. This includes setting GET and POST requests, setting fields for each, etc.
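    For instance, here is a minimal sketch of a POST request; the URL and field names are placeholders I made up for illustration, not from any real form:

    ```php
    <?php
    // Hypothetical example: POST two form fields to a placeholder URL.
    $ch = curl_init("http://www.example.com/form_handler.php");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
    curl_setopt($ch, CURLOPT_POST, true);           // use POST instead of the default GET
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
        'name'  => 'Mark',
        'color' => 'blue')));                       // url-encoded request body
    $response = curl_exec($ch);
    curl_close($ch);
    ?>
    ```

    Using http_build_query() for CURLOPT_POSTFIELDS keeps the body a plain url-encoded string instead of triggering a multipart upload.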

    That is the five minute version of the PHP cURL library. Another quick way to make an HTTP request is to just make a system call to the “wget” command-line utility, which is included on most *nix systems:

    <?php
    $output = shell_exec("wget -q -O - http://www.example.com");
    echo $output;
    ?>

    This is pretty cool, but I think I prefer the other methods because they all run under the Apache process. That’s it for this post!

  • How to setup and use the Xdebug Extension for PHP Profiling

    A profiling tool can provide valuable information about bottlenecks in your code. I believe profiling is a critical aspect of optimizing because it will tell you about your real code bottlenecks as opposed to your perceived bottlenecks. This enables you to focus your resources on areas that will provide the most performance benefit for your effort. Most programming languages and environments have one or more profiling tools available and PHP is no exception.

    Xdebug is a PHP extension that provides valuable debugging information such as stack traces, function traces, profiling, code coverage analysis, etc. There is another PHP tool called DBG that has similar functionality, but this post will focus on using Xdebug.

    Setup Xdebug

    Xdebug is a PECL module and can be installed using the PECL installation instructions in one of my previous posts.

    • If you have already setup PEAR you just need to run the following from a shell:
      sudo pecl install xdebug
      

      If everything goes well Xdebug should download, build, and install. You may get a message telling you:

      You should add "extension=xdebug.so" to php.ini

      Go ahead and add the line. On an Ubuntu server you will probably find the php.ini file here: /etc/php5/apache2/php.ini

    • Restart Apache, or whatever web server you are using, so the change will take effect.
      If you are running Apache on Ubuntu it would be:

      sudo /etc/init.d/apache2 restart
      
    • Write a phpinfo.php page with the following code and then point a browser at it. It should show the Xdebug module there in addition to many other things:
      <?php
      phpinfo();
      ?>

    At this point the Xdebug extension should be installed. For more detailed instructions on a PECL extension install see my post: How to install a PHP PECL extension/module on Ubuntu. Note that there is a problem running the Xdebug extension with Zend Studio since it has its own debugger.

    Enable Xdebug profiling

    • By default, profiling is disabled in Xdebug. Enable it by adding the following entry to the php.ini file:
      xdebug.profiler_enable=On
      

      On a Linux box there is often a php.d or conf.d folder that holds additional .ini settings PHP will use. On Ubuntu the path for this folder is “/etc/php5/apache2/conf.d”. To prevent further cluttering my php.ini file with Xdebug settings, I created an xdebug.ini file to store all my Xdebug settings. This file is located in “/etc/php5/apache2/conf.d” so it is automatically scanned when Apache is restarted.

      Now you might be tempted to enable the profiler in your script using the ini_set() function, which normally allows you to temporarily set an .ini setting for only the current script execution. Unfortunately this does not work. You must set it in php.ini or a sub .ini file and restart the web server.

    • Restart your web server (i.e. Apache) once you have “xdebug.profiler_enable” set to “On”.

    By default, Xdebug will write a cachegrind.out file to the /tmp folder. It will be named something like cachegrind.out.22373 where the number at the end is the ID of the process that was profiled. If you are using Windows you will probably need to change the default folder. Also by default, this file will be overwritten by each script execution so you don’t have to worry about it getting too big. The output file behavior is highly customizable and a complete list of Xdebug settings can be found here.
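    For reference, my conf.d file ends up looking something like this (the directory and the file-name pattern here are illustrative; see the Xdebug settings list for the full set of options):

    ```ini
    ; /etc/php5/apache2/conf.d/xdebug.ini
    extension=xdebug.so
    xdebug.profiler_enable=On
    ; Where the cachegrind.out files are written (defaults to /tmp)
    xdebug.profiler_output_dir=/tmp
    ; %p expands to the ID of the profiled process
    xdebug.profiler_output_name=cachegrind.out.%p
    ```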

    Call your script and display the analysis

    With Xdebug enabled, pull up the page in your browser that you want to profile. If everything is working OK a cachegrind.out file should show up in the /tmp folder.

    There are a couple programs you can use to open and analyze the cachegrind file:

    WinCacheGrind

    WinCacheGrind does not have a lot of features, but it will tell you the main thing you need to know, which is where your PHP application is spending its time. Click on the screen shot below to see the full output of my test script:

    wincachegrind screen shot

    The script makes an external call to example.com using a file_get_contents() function. Based on this analysis I might try caching the call results and only make the call at some interval to keep the cache updated. This would eliminate almost 75% of the application’s overhead and is just one example of an easy-to-fix bottleneck that profiling will help identify.
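    Here is a rough sketch of what that caching might look like; the helper names and the 5-minute interval are my own invention, not part of the profiled script:

    ```php
    <?php
    // Hypothetical time-based file cache helpers.
    function cache_put($key, $data) {
        file_put_contents(sys_get_temp_dir() . "/cache_" . md5($key), $data);
    }

    function cache_get($key, $ttl) {
        $file = sys_get_temp_dir() . "/cache_" . md5($key);
        // Return the cached copy only if it exists and is fresher than $ttl seconds.
        if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
            return file_get_contents($file);
        }
        return false;
    }

    // Usage: refresh the slow external call at most once every 5 minutes.
    $body = cache_get("example_page", 300);
    if ($body === false) {
        $body = file_get_contents("http://www.example.com"); // the expensive external call
        cache_put("example_page", $body);
    }
    ?>
    ```

    Between refreshes every request is served from the local file, so the external call disappears from the profile.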

    KCacheGrind

    KCacheGrind does essentially the same thing as WinCacheGrind but it is geared for the Linux desktop and has quite a few more bells and whistles:

    kcachegrind screen shot

    KCacheGrind includes a map feature that graphically represents the percentages of where the test application spent the time:

    kcachegrind screen shot

    KCacheGrind also includes a graph feature with an export option that displays a tree diagram of the linkage between all the includes and functions:

    kcachegrindexport1.jpg

    That’s it for this post. Have fun profiling!

  • Review: RC Wall Climber/Clamber Remote Control Mini Car (Updated)

    2 out of 5 stars

    RC Wall Climber/Clamber Remote Control Mini Car

    Update: Warning

    I have received at least one report of a non-working car, and the manufacturer does not seem to have a web site that I can find to get a replacement. I have downgraded my rating to 2 stars accordingly. If you buy one of these, make sure you get it from some place you can return it to if it doesn’t work.

    NeweggMall.com recently sent me an e-mail pushing a wall climbing RC car called the Clamber!!! Master-Hand. It is actually listed under RC Wall Climber Remote Control Mini Car but the name on the box is “Clamber!!! Master-Hand” by Top Race R/C Series. Although similar, this is not the same as the Spinmaster Air Hogs Zero Gravity Micro Cars, which are a bit more expensive. It was cheap and cool enough looking that I naturally felt compelled to give it a try.

    How it works

    If you are not familiar with these, they have a vacuum inside that holds them to the wall. The four outer visible wheels are actually fake and just look nice. There are two inner wheels that are not visible (unless you flip it over) that sit against the wall and propel the car.

    The switch on the back has three different modes: off, on without vacuum, and on with the vacuum. This way if you just want to run it on the floor you don’t have to turn on the vacuum and waste your charge.

    The underside has two strips of fabric that sit against the wall to help maintain the vacuum. There are two intake holes on the bottom and 4 slits in the windows on top for the air output.

    The car itself looks pretty cool although the fake tires are a little less than authentic. It comes in three colors: Red, Black, and Blue.

    Performance

    The car does not go too fast, but fast enough. It drives like a tank because it is actually only using two wheels: to steer, it changes the speed of the wheel on the appropriate side. While you are driving, the turn radius is not precise and tends to be a bit large. When you are stopped it will turn on a dime.

    As power starts to run down, the vacuum does not hold the car to the wall as tightly as it does on a full charge, so sometimes the drive wheels will start to slip and you have to turn around and go in a different direction to get moving again.

    Run time on the wall is about 7 minutes, although performance slopes off and the car will start losing its traction around 4 minutes. Even after 7 minutes the vacuum was still strong enough to keep the car on the wall. I didn’t time it, but I am guessing run time on the floor without the vacuum on would be quite a bit longer. When you get close to 8 minutes the power will cut off before the battery is drained too far. It uses a built-in Lithium-Polymer battery, which is probably why a charge lasts as long as it does for something so light. Charge time is about 10 minutes.

    Here are some pros and cons:

    Pros

    • Pretty good amount of drive time per charge (about 7 minutes)
    • Only a 10 minute charge
    • Can rotate on a dime while stopped
    • Fun!

    Cons

    • It will get stuck on even flat, clean surfaces occasionally after the battery has run down a bit.
    • The turn radius while running and while stopped is quite different. When it is stopped it turns on a dime. When it is running it has a very wide turn radius in some cases.
    • IR controller does not perform well under strong light.

    Conclusion

    Overall I originally rated the wall climber 3 out of 5 stars. I would give it more stars if the steering were a bit more consistent and it didn’t get “stuck” as often. Overall it was pretty fun, but I would say the Microfly is a bit more entertaining because, for about the same price or less, it flies around, and that is hard to beat in my opinion.

    Images

    RC Wall Climber Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car

    RC Wall Climber/Clamber Remote Control Mini Car Charging

    Video

  • Review: Senario NRG MicroFly RC Hovering UFO

    Senario NRG MicroFly RC Hovering UFO

    In a previous post I talked about the Senario Alien Microfly a bit and in this post I will provide a full review. I gave a few units to some of my family for Christmas so I have flight reports from them as well.

    The Senario Alien Microfly kit comes with a transmitter and the Microfly itself. The transmitter takes 6 “AA” batteries and also serves as the charger for the Microfly. It is pretty small (see the pictures below) and a lot of fun to fly around the house or office.

    I have put in many, many flights. Each flight is about 5 minutes with a 15-20 minute charge time. The cats gave it a few taste tests but mostly they like to just stalk it as it flies around the living room. 😉 I bought 5 of these for myself and my family for Christmas and all of them worked out of the box. Here is a list of pros/cons:

    Pros

    • It is simple to fly. There is only one control to make it go up or down so you don’t need a lot of experience.
    • Cheap. You will probably find it for $25 or less.
    • Durable. You can’t sit or step on it (it is mostly just foam board) but mine has been through many crashes and even survived a few taste tests by the cats.
    • The flight time is about 5 minutes which I think is pretty good for something so small.

    Cons

    • In very large rooms (gym/church/warehouse) it can quickly get out of range if it doesn’t have walls for the IR signal to bounce off of.
    • After running through three or four sets of rechargeable “AA” batteries, flight times have fallen off quite a bit, although I did get many flights on each set of batteries.
    • Fragile. Although it can withstand being bounced off a few walls it is very small and made of foam so you don’t want to leave it someplace where it will be sat/stepped on.
    • No directional flight. It only goes up and down.
    • Charge time is kind of high… about 15-20 minutes per flight.

    Conclusion

    Overall I give the Microfly 3 out of 5 stars. I would easily rate it higher if it maintained its power after extended use. I don’t know if the built-in battery has just been recharged too many times or if the motor is reaching the end of its life since it is so tiny and spins at such high RPMs. Despite this, I would say it is easily worth the price and would recommend it to anyone who enjoys RC toys.

    Images

    Senario NRG MicroFly RC Hovering UFO

    Senario NRG MicroFly RC Hovering UFO

    Video


  • How to use the file_get_contents() function to make an HTTP request from PHP

    In a previous post I talked about using the HttpRequest object and functions in the PECL_HTTP extension to make HTTP requests from PHP. In some cases you may be limited to functionality built into the PHP core. The file_get_contents() function has fewer features than the PECL_HTTP extension, but it is built into PHP 4.3 and up. Here is an example of using it to retrieve the landing page at www.example.com:

    <?php
    echo file_get_contents("http://www.example.com");
    ?>

    Someone hit that easy button.

    The file_get_contents() function, like many other PHP file functions, is implemented on top of a streams abstraction layer largely conceived by Mr. Wez Furlong. This abstraction layer is what enables many of the PHP file functions to access network resources. Given this functionality, “file” seems a misnomer.

    The file_get_contents() function uses an HTTP GET request but what if you want to do a POST without using cURL or the PECL_HTTP extension? Furlong posted an article here on how to do just that.
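    As a rough sketch of that approach (the field names here are placeholders; Furlong’s article covers the details):

    ```php
    <?php
    // Hypothetical POST using only the streams layer (no cURL, no PECL_HTTP).
    $post_body = http_build_query(array('name' => 'Mark', 'color' => 'blue'));
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/x-www-form-urlencoded",
            'content' => $post_body)));
    // The third argument tells file_get_contents() to use our custom context.
    echo file_get_contents("http://www.example.com", false, $context);
    ?>
    ```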

    This next code example uses the file_get_contents() function again but this time a few options are set first using the stream_context_create() function:

     array(
            'user_agent' => "Mark's Browser",
            'max_redirects' => 3)));
    echo file_get_contents("http://www.example.com", false, $http_options);
    
    ?>
    

    Note that the array passed to the stream_context_create() function can also be used to specify a POST method, which is how Furlong does so in his blog post.

    There is yet another way to make an HTTP request from PHP that I haven’t covered: the built-in cURL functions. I will cover these in a separate blog post.

  • How to make your PHP application check for its dependencies

    The very informative phpinfo() function

    The phpinfo() function displays just about everything you want to know about your PHP installation. It includes info on all your PECL and PEAR modules so it is a quick way to check what’s installed. It will also tell you useful web server information including any query strings you pass. To display all this good info just point your browser at a page on your server that contains the following code:

    <?php
    phpinfo();
    ?>

    That’s it!

    Automatic dependency checking

    The phpinfo() function will give us a page that displays a lot of info. You can pass it bitwise constants to narrow down the information displayed but what if we want to check for specific items?
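    For example, you can combine the INFO_GENERAL and INFO_MODULES constants from the PHP manual to show just those two sections:

    ```php
    <?php
    // Limit phpinfo() output to the general and modules sections.
    phpinfo(INFO_GENERAL | INFO_MODULES);
    ?>
    ```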

    In my humble opinion, when developing software or anything in general, it is a good idea to design things so that the end user will not even need a manual because the user interface is obvious. When something doesn’t work, it should be equally obvious how to fix it.

    If you write a PHP application that others will install and use, it is a good idea to check for dependencies when they try to use the application. This way even if they don’t read your documentation they will quickly know why the software is not working.

    Using phpversion(), PHP_VERSION, and version_compare() to check the PHP version

    To get the core PHP version you can use either of the following methods:

    <?php echo phpversion(); ?>
    or
    <?php echo PHP_VERSION; ?>

    The above code should output something like this:

    5.2.6-2ubuntu4
    or
    5.2.6-2ubuntu4

    If you are using Ubuntu or certain other distributions, you will note that some additional stuff is tacked on to the version number (e.g. “-2ubuntu4”). This makes a comparison to your expected version a little tricky, although you could use a substr()/strpos() combo to get what you need. There is an easier way to do the comparison though. The version_compare() function is “PHP-standardized” version aware, so we can do something like this:

    <?php
    if (version_compare(PHP_VERSION, '5.2.0', '>=')) {
        echo 'Your PHP version is 5.2.0 or higher.';
    } else {
        echo 'Your PHP version is lower than 5.2.0. Please upgrade.';
    }
    ?>

    Now you can check the PHP version and notify the user if it is not the minimum required version.
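    To illustrate, version_compare() with a third operator argument returns a boolean, and without one returns -1, 0, or 1; note that a distribution suffix like “-2ubuntu4” compares cleanly without any manual trimming:

    ```php
    <?php
    // With an operator argument, version_compare() returns a boolean.
    var_dump(version_compare('5.2.6-2ubuntu4', '5.2.0', '>='));  // bool(true)

    // Without one, it returns -1, 0, or 1 (here the left side is lower).
    var_dump(version_compare('5.2.6', '5.3.0'));  // int(-1)
    ?>
    ```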

    The PHP function documentation for each function at www.php.net includes the PHP versions that contain the function in the upper left-hand corner:

    substr function version on php.net

    You can use this to learn what versions of PHP include the functions you are using in your code to help identify your minimum PHP version requirement.

    Using get_loaded_extensions() to check for extensions

    The get_loaded_extensions() function will return an array of PHP extensions that you can use to check if a specific extension is installed. Use it in combination with the in_array() function to check if the extension you require is loaded. In this example I check if the PECL_HTTP module is installed:

    <?php
    if (in_array('http', get_loaded_extensions())) {
        echo 'The PECL_HTTP extension is installed.';
    } else {
        echo 'The PECL_HTTP extension is not installed.';
    }
    ?>
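    As a side note, the extension_loaded() function is a more direct way to perform the same check without building the whole array; a quick sketch:

    ```php
    <?php
    // extension_loaded() checks a single extension by name.
    var_dump(extension_loaded('standard'));     // bool(true), always compiled in
    var_dump(extension_loaded('no_such_ext'));  // bool(false)
    ?>
    ```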

    You can use the phpversion() function to check if an extension is loaded and, if so, its version. This code example not only checks if the PECL_HTTP module is installed, but also checks its version:

    <?php
    if (!in_array('http', get_loaded_extensions())) {
        echo 'Please download the PECL_HTTP extension '.
          'and install it.';
    } else {
        if (version_compare(phpversion('http'), '1.6.0', '>=')) {
            echo 'The PECL_HTTP extension is installed and '.
              'version 1.6.0 or higher. You are all set!';
        } else {
            echo 'Please upgrade your PECL_HTTP extension to '.
              'version 1.6.0 or higher.';
        }
    }
    ?>

    Use function_exists() to check for individual functions

    So far the methods for checking dependencies have been somewhat broad. They check that the script has a certain version of PHP or certain extensions installed, and that will likely be good enough in most cases. If you really want to be thorough you can also check if specific functions are available using the function_exists() function. In this example I check that the http_get() function, which is part of the PECL_HTTP extension, is available before I use it. If it is not, I fall back to the less-featured, built-in file_get_contents() function.

    <?php
    if (function_exists('http_get')) {
        echo 'Using the http_get():<br />' .
            http_parse_message(http_get("http://www.example.com"))->body;
    } else {
        echo 'Using the file_get_contents():<br />' .
            file_get_contents("http://www.example.com");
    }
    ?>
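    function_exists() works for built-in, extension, and user-defined functions alike, so you can sanity-check it against names you already know; a quick sketch:

    ```php
    <?php
    // A built-in function that is always available.
    var_dump(function_exists('file_get_contents'));  // bool(true)

    // A name that should not exist anywhere.
    var_dump(function_exists('definitely_not_a_real_function'));  // bool(false)
    ?>
    ```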

    Check for include files

    Here is a simple way to check for include files. It doesn’t verify their content but you can at least make sure they are there:

    <?php
    // "my_include_file.php" is a placeholder for your own include file.
    if (file_exists('my_include_file.php')) {
        require_once('my_include_file.php');
        echo 'Found the include file.';
    } else {
        echo 'A required include file is missing!';
    }
    ?>

    Wrap up

    Checking dependencies is an important part of building robust software, and hopefully the above techniques will help you accomplish that. Even if your end user is very technical, they will likely appreciate a good dependency-checking mechanism that quickly tells them what’s missing and saves them time. If your software will be used by non-technical users, you might want to automatically and gracefully downgrade your software’s feature set instead of generating errors and asking them for something they won’t know how to do. Usability is king!

  • 25 ways to insecurity

    CWE and SANS recently released the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors list.

    Most of the items are old news, but I think it is a good checklist that should be boilerplate for web application design documents. By putting security requirements in the software specification and design documents, the project manager can allocate time and resources to the security aspects of development. It also reminds developers to ask themselves whether the software is meeting those requirements throughout the development process. The alternative is thinking about security only after the entire application has been written and discovering a fundamental design flaw that requires re-writing a good portion of the application.

    I particularly appreciate that each item on the CWE/SANS list is weighted by factors including weakness prevalence, remediation cost, attack frequency, and attacker awareness. No project has an unlimited budget, but you can prioritize where to focus your resources to achieve the most secure solution. Generally it is a good idea to ensure that the cost of defeating an application’s security far outweighs any benefit to be gained from doing so. The cost of defeating an application might include labor time, computing resources, fines, and the threat of jail time with a cell mate named Bubba.

    It is quite a challenge to develop secure web applications because, by their nature, they generally need to accept user input. I believe it is typically much more difficult to develop a secure system than it is to break into that system given the same number of hours, so there is often more burden on the developer. It might take only two or three days to develop a working database-driven web application but many additional weeks to harden it against attacks and make it reliable, scalable, and highly available. Including security requirements in the software specification and design is essential to planning and allocating resources.

    Ideally, automated tests should be included to continuously test for vulnerabilities throughout the life of an application. This way, security vulnerabilities introduced by code changes will be detected early in the development process instead of later in production. Automated tests could attempt buffer overflows, SQL injections, etc., and could be executed prior to a developer’s check-in or on a nightly cron job that automatically checks out the code and runs the tests against it. Although costly to implement initially, automated security testing will likely pay for itself many times over the course of an application’s life. I plan to talk more about automated testing in future posts.

  • PHP HttpRequest class options and notes

    In a recent post I talked about the PECL_HTTP extension for PHP. In this post I will cover a few of the options you can set for the HttpRequest object and related functions.

    The PECL_HTTP extension allows you to set a number of options when you make a request. Usually you put the options in a key=>value array and pass the array as an argument to the request functions (e.g. http_get(), http_post_data(), http_request(), etc.) or assign the array to the HttpRequest object using the setOptions() method. Here is a code example using the HttpRequest object:

    $http_req = new HttpRequest("http://www.example.com");
    $http_req->setOptions(array('timeout' => 10, 'useragent' => "MyScript"));

    In the above code the timeout is set to 10 seconds and the user agent, which is a request header that identifies the client to the server, is set to “MyScript”. I am going to cover just a few of the options, but a full list of all the request options can be found here.

    timeout

    The timeout option specifies the maximum amount of time in seconds that the request may take to complete. Set this too high and the HTTPD process that PHP is running in could stall for quite a while waiting on a request that may never complete. Set it too low and you might have problems with sites that are just slow to respond. This may require some tweaking depending on what you are doing; if you are making requests to a web server in Taiwan, you might want to set the timeout a bit higher. The default timeout does not appear to be documented on the HttpRequest options page on PHP.NET (that I could find), but if you look at the http_request_api.c file in the PECL_HTTP source code, it looks like it is 0L, i.e. nothing:

    HTTP_CURL_OPT(CURLOPT_TIMEOUT, 0L);

    This indicates it will wait forever unless you explicitly set a timeout, so it might be a good idea to set one! To test this, I put together one page that makes an HTTP request and another that sleeps for some number of seconds.

    Here is the code for the page that will sleep:

    <?php

    echo "Sleeping.";
    sleep(30);

    ?>

    Here is the code for the page that will make the HTTP request:

    <?php
    $http_req = new HttpRequest("http://localhost/alongtime.php");
    $http_req->setOptions(array('timeout' => 10, 'useragent' => "MyScript"));
    try {
        $http_req->send();
    } catch (HttpException $ex) {
        if (isset($ex->innerException)){
            echo $ex->innerException->getMessage();
            exit;
        } else {
            echo $ex;
            exit;
        }
    } //end try block
    echo $http_req->getResponseBody();
    ?>
    

    When I pull up the page that makes the HTTP request in my browser I get the following error:

    Timeout was reached; Operation timed out after 10000 milliseconds with 0 bytes received (http://localhost/alongtime.php)

    If I don’t set the timeout option at all, the page responds 30 seconds later: with no timeout the request would wait forever, so it simply waits out the 30 second sleep on the target page.

    connecttimeout

    The connecttimeout option indicates the maximum amount of time in seconds that the request may take just to connect to the server. This does not include the time it takes for the server to process and return the data for the request. The same considerations as above apply, although the number should be considerably lower since it covers only the connection and not the whole request. Again, the default value is not documented, but if you look at the http_request_api.c file in the PECL_HTTP source code, it looks like it is 3 seconds:

    HTTP_CURL_OPT(CURLOPT_CONNECTTIMEOUT, 3);

    dns_cache_timeout

    One of the interesting features of the PECL_HTTP extension is that it will cache DNS lookups. Some Windows operating systems do this, but many Linux distributions do not by default. (By the way, if you want to clear your cached DNS lookup entries on a Windows box, use the command “ipconfig /flushdns”.) If you are making multiple requests to the same site, DNS lookup caching should provide a significant performance advantage because a round trip to the DNS server isn’t required for every request. The dns_cache_timeout option sets the number of seconds before cached DNS lookup results expire and a new DNS lookup is performed. Again, the default value is not documented, but if you look at the http_request_api.c file in the PECL_HTTP source code, it looks like it is 60 seconds, which is probably fine for most applications:

    HTTP_CURL_OPT(CURLOPT_DNS_CACHE_TIMEOUT, 60L);

    redirect

    The redirect option determines how many redirects the request will follow before it returns with a response. The default is 0 (this IS documented), which may not work in many situations because some applications respond with one or two redirects for authentication, etc. If you set it too high, your application may get bounced around too many times before returning. I have not tried it, but you could probably get a request stuck in a redirect loop. Anyway, a value of around 4 or 5 should be adequate for most applications, I would imagine.

    useragent

    The useragent option allows you to specify a different User-Agent request header to send to the server than the default which is “PECL::HTTP/x.y.z (PHP/x.y.z)” where x.y.z are the versions.

    I made a little one-liner test page that returns the user agent info sent to the server:

    <?php

    echo $_SERVER['HTTP_USER_AGENT'];

    ?>

    If I make an HTTP request to this page using the HttpRequest object without setting the useragent I get:

    PECL::HTTP/1.6.2 (PHP/5.2.6-2ubuntu4)

    If I do something like this:

    $http_req->setOptions(array('timeout' => 10, 'useragent' => "Mark’s Browser"));

    I will get:

    Mark’s Browser

    The reason I bring this up is that some applications you make requests to may respond differently depending on your user agent setting. In some cases you may need to spoof a specific browser to get what you are after.

    Conclusion

    As mentioned before, there are many more HttpRequest options. I just covered a few notable ones that I have some limited experience with.