Enhance your (page) performance!

Not an original post - sourced from the web, with thanks to the original author for such an excellent article. Original article: http://www.thinkvitamin.com/features/dev/enhance-your-page-performance


Sluggish internet speeds may be a thing of the past, but instant page loads are still the stuff of the future. Christian Heilmann has some tips for delivering faster, smoother pages to your visitors today

One of the biggest obstacles to tackle in web site and web application design is the initial response time of the product. There is a common feeling among web users that things just don’t happen fast enough.

Why is this such an issue? Perhaps people who’ve been using the web for years remember the times when we had to pay by the minute (in the way that hotel or airport users do even now), or there could be just a general feeling of being let down by the promised information superhighway. I think in part it’s Hollywood’s fault: in every action flick there are high-resolution, data dense animated web interfaces that show up at the touch of a button, and encyclopaedic data gets loaded and displayed in a matter of milliseconds.

In real life it is simply not the case, because no matter how much you try to streamline your pages, there are always delays. In the case of a web site, apart from the general lag, it is often a cosmetic issue - especially when there are flashes of unwanted content. With web applications it can be more problematic as you have to make sure that visitors cannot activate interaction elements prematurely and break the app.

What makes web sites slow?

Whenever talk turns to the speed of web sites, the most commonly advertised trick is to cut down on the file size of everything (which also leads to endless - and fruitless - discussions about the size of JavaScript libraries). In reality, many more factors play a part in the initial response time of a web page:

  • The file size of the HTML document
  • The file size of the dependencies in the document (scripts, images, multimedia elements)
  • The complexity of the HTML (simpler pages are easier to render for the browser)
  • The speed of the connection of the user
  • The speed of third party servers as content may be pulled and included from them
  • The response time of the DNS servers resolving the domains and pointing you to these other servers
  • The responsiveness and speed of the visitors’ computer (how busy the machine is with other tasks, as that eats into the rendering time of the browser)
  • The responsiveness of the server

These are the technical parts of the equation. Then there is also the human factor: visitors don’t consider a page fully loaded until it shows up, stops “jumping around” and has no half-loaded images.

Things to do to make web sites faster

There are some well-known general best practices you can follow to overcome some of these technical and human factors and ensure a quick response web site:

  • Optimize all the HTML and dependencies as much as you can without losing quality (this can include stripping the HTML documents of any comments and superfluous linebreaks, which should be part of the publication process. In order to keep sites maintainable you still need those in the source documents)
  • Reduce dependencies by using the least amount of file includes (collate several scripts into one include, use CSS sprite techniques to load all images at once)
  • Make sure that you don’t include third-party content from their servers: set up a script that caches RSS feeds locally and use that one instead. The benefit is not only that you don’t have to deal with the DNS server delays but you are also independent of the other server should it go down.
  • If possible, define dimensions for images and their container elements. This will ensure that the first rendering of the page will be correct and there won’t be any “jumping around” when the images are loading.
  • Include large dependencies such as massive scripts at the end of the document, as this means that the rest of the page gets shown before the browser loads them. Large JavaScript includes in the head of the document mean that the browser waits with rendering until they are loaded.

Best practices vs. special speed requirements

Unfortunately, some of these tricks clash with what we consider best practices in web development. Cutting down on the number of included files, for example, impedes the maintainability of the product. In order to make it as easy as possible to maintain the look and feel of a site with different pages (home, articles, archive…) it makes sense to keep the different styles in their own includes and only add them to the pages that really use them. You could have one base CSS include and then one for the homepage, one for articles and so on.

The same applies to scripting - keeping methods that do the same job in their own JavaScript includes makes maintenance a lot easier, as you know immediately where to find a certain method without having to scan the whole script. Furthermore, adding scripts inside the body of the document is dirty, as it mixes the web development layers of structure and behaviour.

Luckily there are technical solutions for most of these problems.

Using single includes for several style sheets or scripts

One solution, written by Edward Eliot, is a PHP script that collates several scripts or CSS style sheets into a single file. In the case of JavaScript it even cuts down the size of the script using Douglas Crockford’s JSMin. The script is dead easy to use and will cache the collated file for you until you change one of the files included in it. This means that your files are automatically packed and cached, and the include file is updated when you change them. You get both maintainability and speed without having to change anything by hand.

Mission almost possible: tackling the onload problem

One other really big issue is that unless you embed your scripts in the body of a document you’ll have to start them when the document has finished loading. This results in a slight delay, and can cause problems.

The delay is caused by the way browsers load, parse and render documents. If you start your scripts with the onload event of the window, all of the following steps have to be finished first:

  • HTML is parsed
  • External scripts/style sheets are loaded
  • Scripts are executed as they are parsed in the document
  • HTML DOM is fully constructed
  • Images and external content are loaded
  • The page is finished loading

In a lot of cases this takes far too long, and scripts need to start a lot earlier. Many clever web developers are tackling this issue, and every so often a new answer gets released. Most JavaScript libraries have an onAvailable or onDocumentReady event handler that starts the script as soon as the necessary parts of the document are loaded, rather than the whole lot including images. In practical and admittedly hard-core testing with older browsers and operating systems, none of them turns out to be truly bulletproof though. However, we are all on the case and with luck we’ll get there eventually.
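
The core of such an onDocumentReady handler can be sketched as follows - a minimal version, assuming the handler is safe to run exactly once, using DOMContentLoaded where the browser supports it and falling back to window.onload elsewhere:

```javascript
// A minimal "run as early as possible" helper. DOMContentLoaded fires once
// the HTML is parsed, before images finish loading; window.onload is kept
// as a fallback so older browsers still run the handler eventually.
function onReady(handler) {
  var done = false;
  function run() {
    if (done) { return; }  // guard: both events may fire, run only once
    done = true;
    handler();
  }
  if (typeof document !== 'undefined' && document.addEventListener) {
    document.addEventListener('DOMContentLoaded', run, false);
  }
  if (typeof window !== 'undefined') {
    window.onload = run;   // fallback: fires after images have loaded too
  }
  return run;              // returned so it can also be invoked manually
}
```

Real libraries add further tricks (polling document.readyState, browser-specific events), which is exactly where the bulletproofing gets hard.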

For web applications where a premature activation of elements can result in a failure of the app this is absolutely vital. If your problem is of a cosmetic nature, there might be a workaround.

Avoiding the on-load problem with on-demand pulling of content

Most cosmetic on-load issues are caused by overloading the document with far too much content. This could be massive amounts of text displayed in a tabbed interface or a navigation that is four levels deep. With JavaScript enabled and executed without a glitch, this content can be navigated and displayed in a dynamic fashion and easily digestible chunks. When you turn off JavaScript and see the whole document unstyled it can become a real pain to find your way through it and that is never a good plan. This extra content also adds unnecessarily to the page weight of the initial load.

The solution is to use JavaScript to load the content only when the clever interface can be offered to the user. Users without JavaScript would get a plain vanilla version that only has the most necessary elements and content.

Which techniques you use to pull in this extra content will depend on what you are trying to include. The easiest option is a dynamically generated script tag - an old trick that was used to pull in large JavaScript data sets or scripts on the fly while the page was loading:


function pull() {
  // create a script element pointing at the large include
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = 'largeJavaScriptBlock.js';
  // appending it to the head makes the browser load and execute it
  document.getElementsByTagName('head')[0].appendChild(s);
}
// only start pulling once the rest of the page has loaded
window.onload = pull;

This trick can also be used to include the output of APIs that support JSON, for example del.icio.us. As a JSON object is nothing but a chunk of JavaScript, you can include it with a generated script tag once the document has loaded and is displayed, and use it to replace an element’s content. The wrapper object Dishy allows you to do this easily. Another example is the unobtrusive Flickr badge, which uses the JSON output of Flickr to show your latest photos when JavaScript is available, but only a link to them when it is turned off.
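
The JSON-over-script-tag pattern can be sketched like this - the endpoint and the name of the callback parameter are hypothetical (real APIs such as del.icio.us or Flickr document their own); the service is expected to wrap its JSON output in a call to the named global function, so simply executing the generated script delivers the data:

```javascript
// Build a script URL carrying the name of the callback function the
// service should wrap its JSON in. The "callback" parameter name is an
// assumption; check the API you are actually using.
function jsonpUrl(endpoint, callbackName) {
  // respect an existing query string when appending the parameter
  var sep = endpoint.indexOf('?') === -1 ? '?' : '&';
  return endpoint + sep + 'callback=' + callbackName;
}

function pullJson(endpoint, handler) {
  // a unique global name so several requests cannot clash
  var name = 'jsonpHandler' + new Date().getTime();
  window[name] = function (data) {
    handler(data);          // hand the decoded data to the caller
    window[name] = undefined; // tidy up the temporary global
  };
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = jsonpUrl(endpoint, name);
  document.getElementsByTagName('head')[0].appendChild(s);
}
```

The handler then writes the data into the page, replacing the plain fallback content.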

In order to include non-JavaScript content you can use Ajax, or AHAH, or Hijax, or whatever you want to call Ajax without the XML part! An example of this is the optional Ajax navigation, which goes even further, as it only loads the more complex interface when the visitor asks for it.
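
An AHAH fetch boils down to very little code. This is a rough sketch against the classic XMLHttpRequest API of the period (no ActiveX fallback for old Internet Explorer shown); the fragment URL and target element id are placeholders supplied by the caller:

```javascript
// True when an XMLHttpRequest has finished and returned OK.
function requestSucceeded(readyState, status) {
  return readyState === 4 && status === 200;
}

// Fetch an HTML fragment and drop it straight into a target element -
// Ajax without the XML part.
function loadFragment(url, targetId) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);               // asynchronous request
  xhr.onreadystatechange = function () {
    if (requestSucceeded(xhr.readyState, xhr.status)) {
      document.getElementById(targetId).innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
}
```

Because the fragment is injected only when JavaScript runs, visitors without it never pay for content they cannot reach.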

Imaging trickery

The last idea stems from a time that may just have been before you even started developing for the web! Netscape, the ill-fated (but IMHO at that time better) competitor to Internet Explorer during the browser wars, had a custom HTML attribute for images called ‘lowsrc’. It let you define an image with a much smaller file size than the real one; the small image was loaded first and then covered by the real one once it had arrived. This gave even users on ridiculously slow connections a preview of what was to come.

You can reuse that idea: instead of embedding large mood imagery in the page as it loads initially, use more stylised, lighter images that get replaced with the real ones once the page has loaded. Or you could go even further and use only background colours at first. You then use JavaScript and the DOM to load the real image when the document has finished loading and cover the preview with it.
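
A lowsrc-style swap might look like this sketch. The "preview" class and the file-naming convention (photo_small.jpg becoming photo.jpg) are assumptions for illustration, not a standard:

```javascript
// Derive the full image path from the preview path, assuming the
// hypothetical convention photo_small.jpg -> photo.jpg.
function fullImagePath(previewSrc) {
  return previewSrc.replace(/_small(\.[a-z]+)$/, '$1');
}

// After the page has loaded, point every preview image at its full
// version; the browser then loads the big file over the small one.
function swapPreviews() {
  var imgs = document.getElementsByTagName('img');
  for (var i = 0; i < imgs.length; i++) {
    if (imgs[i].className === 'preview') {
      imgs[i].src = fullImagePath(imgs[i].src);
    }
  }
}
// wire it up in the page with: window.onload = swapPreviews;
```

The initial render shows the light previews immediately; the heavy imagery arrives afterwards without blocking anything.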

This trick can also be immensely effective when you include lots and lots of smaller images from several servers (like gravatars for example) as these are not likely to be cached. Simply use a placeholder graphic initially and replace them with dynamically created images when the page has loaded.

Summary

This is of course only an overview of what is possible, but I hope some of the suggestions make sense and will help you change your site or app to make it more responsive. If you have more tricks, don’t be shy - comment about them.

posted @ 2007-09-27 14:26  阿一(杨正祎)