Internet
Introduction
Developed in North America, the Internet has rapidly spread around the world in the past decade. A high percentage of the content on the Internet still resides inside the USA, as much as 80 percent by some estimates (Liebowitz, 1998). Access to this content for international users suffers from the characteristics of the long network path between these users and the remote server. Lack of capacity and long delays effectively render network delivery of media-rich information and advanced digital libraries impracticable for many countries. Thus, while Internet connectivity to virtually every country on the globe has brought the promise of an information-rich world for all, that reality often falls short in the face of limited and unpredictable access to network information resources.
This article seeks to inform the information professional about dramatic enhancements to current Internet services that are being enabled by technology trends and new methods of network engineering. While a user's local environment (computer, access lines, etc.) ultimately shapes the range of applications available to them, the trends we discuss address fundamental improvements in the speed and availability of information for users spread across the global Internet. Such dramatic improvements in the quality of access, and in the range of data sets available in a timely fashion, can significantly enhance the benefits of Internet access by enabling new classes of applications.
While the focus of the article is to motivate and explain important new Internet technology, it is important to acknowledge the multi-faceted nature of Internet access in order to provide the appropriate perspective on purely technological advances. Accordingly, the paper begins with a look at the multiple dimensions of Internet diffusion and barriers to Internet access at several levels. We then outline innovative new data replication methods, collectively termed logistical networking by some researchers (Beck et al., 1999b), which are being used to solve the problems of very long distance access to WWW servers. We discuss the application of this concept of data staging in the network by a variety of projects, especially an Internet2 research effort, and give examples of how this approach is being used for specific digital collections. As emphasized in our conclusion, the movement towards flexible data staging in the network is changing the quality of Internet delivery for all users of the Internet, but it is an especially important trend for users accessing content over trans-oceanic links or in bandwidth-poor sections of the global Internet.
The reach of the global Internet
Access to information delivered over the Internet is best characterized on a sliding scale or, more accurately, on a multi-dimensional scale. Ultimately, individuals and organizations have very different information needs, and the connection between access and benefits must be scrutinized carefully (Jimba and Atinmo, 2000; Watters et al., 1998). Still, collective Internet access patterns offer a macro-scale view of Internet diffusion as a starting point.
At the level of simple connectivity to the Internet for some groups of users, almost every country on the globe can be said to be on the Internet. More usefully, Press et al. (1998) proposed consideration of the following dimensions in assessing the dispersion of the global Internet to individual countries:
pervasiveness, which is the degree to which Internet use is reflected in users per capita and in the number of nontechnician users;
geographical dispersion, which is the degree to which Internet access is widely available in a country (as opposed to only in one part of a country);
sectoral absorption, which is the degree of Internet utilization in key organizational sectors, specifically education, commercial, health care, and public affairs;
connectivity infrastructure, which is the measure of international and intranational backbone bandwidth, exchange points, and last-mile access methods;
organizational infrastructure, which is the state of the ISP industry and market competitiveness and degree of innovation in the marketplace for Internet services; and
sophistication of use, which is the characterization of usage from conventional to highly sophisticated and driving innovation.
In these measures of collective access, there is a subtle underlying issue of network performance. Network performance is a dynamic characteristic of a user's computing experience, and it is influenced by both the connectivity infrastructure and the computer used for network access. The delays users experience, and their overall network experience, have a very real effect not only on what they do with the network (adoption rates and sophistication of use) but also on how they think about the network as an information resource. Later in this article we discuss new technology for Internet data delivery that will improve the user experience even without upgrades in connectivity infrastructure. We argue that these new engineering efforts, which are visibly improving the delay characteristics of the WWW, are more than mere technical feats: they will matter to all Internet users, and most especially to international users.
Barriers to Internet access
The barriers to high-quality Internet access (broadly defined) are varied. While this article focuses on technology advances, it is worth remembering that technology is only one component of the overall picture. Briefly, in addition to the technology components (discussed in the next section), the following are key pieces of the Internet access puzzle within a country.
Telecommunications policy
It is important to place Internet access within the context of the vast changes taking place in the communication and computing industries and their impact on regulatory frameworks. Until recently, Internet service provision was a separate industry from telecommunications and broadcasting (T&B). T&B services need strong quality-of-service guarantees in the network, and this has long justified separate networks for these services. However, as digital technologies have pervaded all media processing and the Internet has continued to increase in bandwidth, the line between Internet services and T&B services has blurred considerably, with convergence taking place at all points in the value chain from content creation to end-user consumption (Tadayoni and Kristensen, 1999).
An important implication of this grand convergence has been the need for an adaptation of the regulatory frameworks governing T&B industries in virtually all parts of the world. Regulatory bodies have been slow to adapt in part because of the enormous challenges posed by the global nature of communications. It is difficult to develop laws and economic frameworks that properly address issues such as universal access, quality of information, and cultural development within the context of an increasingly global broadcasting industry.
In many countries, deregulation of the telecommunications monopolies of the past has not yet taken place, limiting competition and curtailing Internet access in the "last mile". For example, writing in 1998, Burkhart et al. (1998) state flatly: "Government policy is the principal constraint on Internet development." Moreover, beyond telecommunications monopolies that stifle competition in the access domain, government bureaucracies can stifle Internet access in other ways, sometimes intentionally and sometimes incidentally. Authoritarian regimes that see information technology as a threat to their power control (or attempt to control) its use: in Libya, for example, Danowitz et al. (1995) report that "computers, telephones, fax machines, and other communications devices ... must be registered with the government." Less directly, bureaucratic regulation simply complicates equipment purchase in many countries. As a counterbalance, explosive demand for Internet access and the realities of an increasingly globalized economy are putting great pressure on government policy-makers to acknowledge the need for, and to implement, policies friendly to the spread of Internet access.
Economic issues
Simple economics are another barrier to Internet diffusion: computing devices and Internet services are expensive. Internet access providers in non-competitive markets, such as the monopolistic PTTs noted above, can obviously charge their customers high prices. But before considering the cost of access itself, a user (or organization) must be able to afford a personal computer. To put the cost of computers in perspective, a standard analog phone can now be built with less than one dollar's worth of hardware components, and that phone draws its power from the telecommunications network to which it is connected. By contrast, Internet computers in the low-end "appliance" category of a few hundred dollars per machine are still rare in the marketplace. As a result, the penetration of PCs lags far behind that of telephones in areas of the world with low per capita incomes; for example, Burkhart et al. (1998) report the 1996 density of phones in India as 1.54 per 100 people, versus 0.15 PCs per 100 people.
A promising trend (at least for modestly wealthy countries) is the appearance of Internet service providers experimenting with new revenue models offering free Internet access. These ISPs give customers free Internet access in exchange for the collection of demographic information and/or the distribution of targeted advertising. Where phone density rates are relatively high and local phone calls relatively inexpensive, free Internet access could enable many new users to connect to the Internet. One WWW site catalogs free ISPs in 23 countries as of May 2000[1].
Cultural factors
Another factor found relevant to Internet diffusion is the cultural dimension of adopting and using information technology; that is, computer use is a social phenomenon. In this context, it is important to recognize that the Internet was pioneered in the USA and used almost exclusively in a handful of countries until relatively recently. In many parts of the world, factors such as language and education limit the use and usefulness of the Internet, though, on the technology front, the rapid globalization of the WWW is leading to more focus and commercial interest in developing and supporting multilingual software and content. To underscore this point, some estimates place non-English-speaking Internet users at roughly 50 percent of all Internet users today[2], and the dramatic growth in Internet access in Asia will tip this balance rapidly towards non-English speakers in the coming years.
In some instances, there is resistance to the Internet as a symbol of the rapid globalization of communication and information that threatens local cultures. As a counterbalance, however, the value of supporting Internet infrastructure, not only as an economic boon but also as a means of strengthening local and regional identity, has been documented in focused regional networking projects such as that of Bulashova and Cole (1999). As an example of the complex sociological and psychological factors underlying adoption of information technology, consider that, as noted in Danowitz et al. (1995), some areas of the world where cheap labor abounds (e.g. North Africa) find an inherent conflict between the labor-saving nature of information technology and the employment-ensuring nature of the public-service sector.
Technology challenges
While connectivity is the most basic form of access, Internet access has a sliding scale of usefulness depending on the interactive quality, size, and richness of media that is in practice available to the connected user. In the past, the WWW has most often involved a single server to which users (clients) send requests. Very busy content providers such as Yahoo! now commonly have more than one physical machine, all with identical content, located in the same building and known as a server farm. Requests arriving over the Internet are shuttled to an available server as they arrive. Server farms enable load balancing and scale up the number of requests that can be served simultaneously.
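To make the dispatching idea concrete, the following sketch illustrates round-robin dispatch across a farm of identical replicas; the dispatcher and server addresses are invented for illustration and do not describe any particular provider's mechanism.

```python
from itertools import cycle

# Hypothetical server farm: identical replicas in one building.
FARM = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

class RoundRobinDispatcher:
    """Shuttle incoming requests to the next replica in turn."""
    def __init__(self, servers):
        self._next = cycle(servers)

    def dispatch(self, request):
        server = next(self._next)
        # A real farm would forward the request over the network;
        # here we simply report the assignment.
        return f"{request} -> {server}"

dispatcher = RoundRobinDispatcher(FARM)
for i in range(5):
    print(dispatcher.dispatch(f"GET /page{i}"))
```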
Server farms cannot address the delays and bandwidth constraints associated with clients that are either far away physically or located in bandwidth-poor parts of the network (or both). For interactive applications like the WWW, ensuring a low delay when accessing a data object is critical to maintaining a high-quality experience for the user. Over long network paths, delays accumulate from the many intermediate routing devices that must receive and redirect the information across the network. In extreme cases such as trans-oceanic fiber or satellite links, propagation delay itself (bits travel at roughly the speed of light across a link) becomes significant. Satellite links especially introduce noticeable delay, as anyone who has made a long-distance call over a satellite connection can attest.
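A back-of-the-envelope calculation makes the propagation factor concrete; the distances below are rough, illustrative assumptions.

```python
# Light travels at ~3.0e8 m/s in a vacuum and ~2.0e8 m/s in optical fiber.
C_VACUUM = 3.0e8   # m/s (satellite radio link)
C_FIBER  = 2.0e8   # m/s (fiber optic cable)

# Rough, illustrative distances.
TRANSATLANTIC_FIBER_M = 6_000_000    # ~6,000 km of cable
GEO_SATELLITE_ALT_M   = 35_786_000   # geostationary orbit altitude

fiber_one_way = TRANSATLANTIC_FIBER_M / C_FIBER
sat_one_way   = 2 * GEO_SATELLITE_ALT_M / C_VACUUM  # up to the satellite and down

print(f"Trans-Atlantic fiber, one way: {fiber_one_way * 1000:.0f} ms")    # ~30 ms
print(f"GEO satellite hop, one way:    {sat_one_way * 1000:.0f} ms")      # ~240 ms
print(f"GEO satellite round trip:      {2 * sat_one_way * 1000:.0f} ms")  # ~480 ms
```

Even before queueing in routers is counted, a satellite round trip approaches half a second, which is why such calls feel halting.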
At the same time, long network paths tend to be congested. Despite the advances in fiber optic speeds, wide-area network links today in many parts of the Internet experience congestion, sometimes chronically so. The expense of long-distance runs is quite high, which encourages aggressive sharing of these links. User demands for bandwidth, moreover, continue to scale up with the increased backbone capacities, at least for now. Thus, while bandwidth may become plentiful in the future, it remains a constraint today for international users and, coupled with high delay, a throttle on the range of available media as well.
Data replication for global information delivery
Powerful technology trends are creating ever more capable and cost-effective mass storage systems and high-speed networks. Storage costs have been dropping tremendously in recent years. For example, one expert points out that the costs of computer disk and computer memory have each fallen by a factor of more than 1,000 in the past two decades, and this trend has considerable momentum to continue (possibly even accelerate) for the foreseeable future (Gray, 1999). At the same time, network links continue to grow in bandwidth capacity with the advances in fiber optic technology, especially a technology called Dense Wavelength-Division Multiplexing (DWDM). Fiber optic transmission uses lasers to send light pulses carrying the bits of information over fiber optic cables. DWDM allows a single fiber to carry many wavelengths ("colors") of light simultaneously, instead of the single wavelength used previously. The effect is to multiply the bandwidth of a single cable many times over, for example by 40 to 120 times for current DWDM hardware. These explosive advances in fundamental technologies for networks and storage systems bode well for content replication schemes that are properly designed to scale up with increasing storage and network capacities.
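The bandwidth multiplication is simple arithmetic, as the following sketch shows; the per-wavelength rate is an illustrative figure rather than a vendor specification.

```python
# Illustrative DWDM capacity calculation.
per_wavelength_gbps = 2.5          # e.g. one OC-48-class channel (assumed figure)
for wavelengths in (1, 40, 120):   # single-wavelength fiber vs. DWDM systems
    total = per_wavelength_gbps * wavelengths
    print(f"{wavelengths:3d} wavelength(s): {total:7.1f} Gbit/s per fiber")
```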
New data replication strategies: the way forward
Many projects and new commercial ventures are positioning themselves to leverage the opportunities afforded by the trends in mass storage and high-speed networks discussed above. Below we discuss a few commercial efforts and a high-profile academic project within Internet2 in the USA. Again, it is important to emphasize that these efforts have different goals (e.g. commercial WWW service versus experimental applications) and different audiences, but the underlying technology solutions are proving to have enormous appeal. In fact, these efforts are already widely deployed and accepted; many Internet engineers believe data replication as envisioned in these new systems has already become the mainstream solution for the Internet going forward.
Replicating content within the network closer to the end-user is not a new idea. It is done today in two ways. So-called WWW caches are computers inside the network that store the page fetched by a URL so that a later request for the same URL need not travel all the way back to the server. Instead, the cache itself sends the user the saved response. WWW caching has been deployed for some time in the Internet and continues to be an important way that Internet access can be improved (Iyengar and Challenger, 1997; Rodriguez et al., 1999). For example, many countries have a hierarchy of caches, e.g. in the JANET project in the UK[3]. The effectiveness of caching, however, is limited by several well-understood factors. Important among these is the increasing prevalence of dynamic content: news headlines from CNN.com or stock quotes from a site cannot be cached because they change frequently. Also, many large WWW servers are reluctant to give up control over their content and thus explicitly disallow caches from making a copy of their pages. Furthermore, server-side functions like counting the number of users that visit a site or customizing the user's view do not work when pages are served from a cache.
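The core caching idea, and the reason dynamic or opted-out content defeats it, can be sketched in a few lines; the cacheable flag below is a stand-in for the actual HTTP mechanisms that signal cacheability.

```python
class WebCache:
    """Toy URL cache: serve a stored copy when one exists and is allowed."""
    def __init__(self, fetch_from_origin):
        self._store = {}                  # URL -> saved response
        self._fetch = fetch_from_origin   # fallback to the remote server

    def get(self, url, cacheable=True):
        if url in self._store:
            return self._store[url]       # hit: no trip back to the server
        response = self._fetch(url)       # miss: full round trip
        if cacheable:                     # dynamic pages and opted-out servers
            self._store[url] = response   # are simply never stored
        return response

cache = WebCache(lambda url: f"<html>content of {url}</html>")
cache.get("http://example.org/static.html")                  # miss, stored
print(cache.get("http://example.org/static.html"))           # hit, served locally
cache.get("http://example.org/headlines", cacheable=False)   # never stored
```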
Another way to position data closer to the user is by explicitly copying the data themselves to another (WWW) server, or mirroring. Mirroring can be done on an ad hoc basis, usually with only loose cooperation between the mirror sites, and is widely practiced in the contemporary Internet. A recent study (Bharat and Broder, 1999), based on a set of 179 million URLs collected in a Web crawl, found that about 10 percent of all hosts were mirrored to varying degrees. Mirroring has a number of advantages. Source-object replication provides explicit control over data placement and data freshness policies. Replicating collections improves client access by shortening the network path from the client to the server, which is especially crucial in the global Internet, where the bandwidth limitations of trans-oceanic links and the fundamental latency constraint of speed-of-light signal propagation are major factors. Well-placed mirrors can localize access to regional networks or even specific local area networks (LANs) for important clients. Regional and local area networks are inherently fast and reliable, owing to their transmission technologies and the tendency of organizations to overprovision them. Wide-area networks (WANs), by contrast, while composed of high-capacity fiber optic links, are heavily utilized due to the high cost of installing and managing very long-distance lines.
In the past, mirroring Internet-based collections has been a largely informal, ad hoc process based on data management scripts and system utilities. This unstructured approach to mirrors results in management difficulties and usability problems for the end user. For example, users are most often forced to determine an appropriate mirror site manually, e.g. Apache.org lists 150 mirrors from which the user selects by domain suffix. While workable, such manual resolution frustrates users, and it does little to minimize the user's access time or to maximize the load-balancing benefit of the mirroring process.
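The manual step a user performs today amounts to a lookup like the following (the mirror list is hypothetical); the systems discussed below automate exactly this decision.

```python
# Hypothetical mirror list of the Apache.org kind, keyed by domain suffix.
MIRRORS = {
    ".uk": "http://apache.mirror.example.ac.uk/",
    ".de": "http://apache.mirror.example.de/",
    ".au": "http://apache.mirror.example.au/",
}
DEFAULT = "http://www.apache.org/"

def pick_mirror(client_hostname):
    """Mimic a user choosing a mirror by matching their own domain suffix."""
    for suffix, mirror in MIRRORS.items():
        if client_hostname.endswith(suffix):
            return mirror
    return DEFAULT   # no match: fall back to the origin server

print(pick_mirror("pc42.lib.example.ac.uk"))   # nearby UK mirror
print(pick_mirror("host.univ.example"))        # no match: origin server
```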
Proprietary commercial mirroring software has recently introduced a new class of replication services. Akamai's FreeFlow technology[4] uses server-side scripts to rewrite URLs embedded in WWW documents such that selected embedded objects will be fetched from one of the more than 2,000 Akamai servers world-wide. Similarly, companies such as Digital Island are providing sophisticated application hosting and content management services using a set of globally distributed servers.
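The URL-rewriting technique can be sketched as follows; this illustrates the general idea only, not Akamai's actual FreeFlow implementation, and the replica hostname is invented.

```python
import re

REPLICA_HOST = "replicas.example.net"   # hypothetical distributed-server domain

def rewrite_embedded_urls(html, origin_host):
    """Point embedded objects (images, in this toy case) at a replica server
    so that only the page skeleton is fetched from the origin."""
    pattern = rf'src="http://{re.escape(origin_host)}/'
    return re.sub(pattern, f'src="http://{REPLICA_HOST}/{origin_host}/', html)

page = '<img src="http://www.example.com/logo.gif"> <p>text</p>'
print(rewrite_embedded_urls(page, "www.example.com"))
# <img src="http://replicas.example.net/www.example.com/logo.gif"> <p>text</p>
```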
In the commercial domain, companies have recently appeared that are attempting to use the intuitive paradigm of a shared filesystem as a basis for new network-based storage services. Distributed filesystems such as Andrew have existed for some time, but these new efforts attempt to extend shared files within network-based storage servers to very large numbers of desktop PC users. For example, I-Drive's service interface[5] extends the familiar desktop notion of a network disk, which appears in a standard view on MS Windows systems, to Internet-based storage. Using the drive metaphor, users can copy and share files on their Internet drive (I-drive) as on other network drives. Value-added services such as printing are also offered through the hosting service.
The I2-DSI project
Within the academic research community, the Internet2 Distributed Storage Infrastructure (I2-DSI) Project[6] (Beck and Moore, 1998; Beck et al., 1999a) is developing a novel distributed network storage solution as part of Internet2 technical activities. The I2-DSI project seeks to explore the research, engineering, and operational issues behind designing, evaluating, and deploying scalable replication services based on middleware running in conjunction with dedicated replication servers across the Internet2 backbone network. I2-DSI is foremost a research project that has engaged academic researchers as well as significant corporate sponsors interested in the convergence of network and storage systems. Current corporate sponsors include Cisco, IBM, Novell, Microsoft, Ellemtel, Sun, and Starburst Communications. The common goal among participants is to advance the state of the art in understanding how to build and use replication services, first for the Internet2 community and then for Internet users broadly defined.
Working within the Internet2 project, I2-DSI has the flexibility to deploy and experiment with solutions that might be resisted or limited in the commodity Internet, e.g. specialized client software or server configurations. Unlike key components of commercial solutions, the middleware developed by the project will have an open architecture, based on open-source software whenever possible, so that its solutions can propagate widely. Open source software from the I2-DSI project is already propagating to international partners interested in mirroring.
The I2-DSI software architecture supports content replication across a set of distributed server machines. Under I2-DSI, a set of related objects, a "content channel" (Beck et al., 1999b), is published by an authorized user group. As part of the initial publication of the content, metadata on the channel are collected in a channel metadata repository for channel management and user services. Within the I2-DSI core, replication software stages the objects in a content channel at a master node and then replicates the channel to a subset of the I2-DSI platforms. Clients accessing the channel require a resolution service to aid them in finding a "nearby" server. As with replication mechanisms, multiple resolution mechanisms are available from both commercial and research projects.
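A minimal sketch of these moving parts appears below; all names and structures are hypothetical simplifications, not the actual I2-DSI middleware.

```python
from dataclasses import dataclass, field

@dataclass
class ContentChannel:
    """A published set of related objects plus its management metadata."""
    name: str
    publisher: str
    objects: list                      # URLs/paths of the channel's objects
    metadata: dict = field(default_factory=dict)

def publish(channel, master, replicas, repository):
    """Record channel metadata, stage at the master node, then replicate."""
    repository[channel.name] = channel.metadata   # channel metadata repository
    return {server: list(channel.objects) for server in [master] + replicas}

def resolve(client_region, placement, proximity):
    """Resolution service: direct a client to a 'nearby' replica."""
    candidates = [s for s in placement if s in proximity.get(client_region, [])]
    return candidates[0] if candidates else next(iter(placement))

repo = {}
channel = ContentChannel("highmpeg", "unc", ["trailer1.mpg", "trailer2.mpg"])
placement = publish(channel, master="unc.dsi",
                    replicas=["utk.dsi", "iu.dsi"], repository=repo)
print(resolve("midwest", placement, proximity={"midwest": ["iu.dsi"]}))  # iu.dsi
```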
I2-DSI has deployed a national testbed of storage servers, shown as the tall machines in Figure 1. Operational servers with over 100 GB of storage (one with 700 GB of disk) are now online in North Carolina, Tennessee, South Dakota, Indiana, and Texas. A number of candidate sites, both in North America and abroad, are available for near-term expansion of the testbed; these appear as the starred sites in the figure. The initial software profile of these servers includes a collection of open-source tools, some customized for the DSI project, managed by DSI-specific scripts and programs. A small set of demonstration channels has been constructed and deployed across five of the testbed servers to exercise the replication logic. Dynamic resolution is provided by a Cisco Distributed Director hardware device located on the Internet2 backbone. Experimental publication interfaces are also in the early developmental stages.
Examples of Internet2 DSI content distribution models
I2-DSI has deployed a number of content channels to experiment with content distribution models for replication services and to gain experience with replicated collections. We give some specific examples here to make concrete the potential of a distributed storage system on the scale of I2-DSI.
Streaming video collections
Streamed digital video, that is, video files delivered by a streaming server to a client for immediate playback from the network, is a class of content delivery that benefits greatly from replication. The size and real-time delivery constraints of streaming video stress server and network resources as the number of possible concurrent streams at the video server increases.
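The stress is easy to quantify, since each stream reserves its bit-rate for its full duration: a server's outbound link caps the number of simultaneous viewers. The figures below are illustrative, using the 1.5 Mbit/s MPEG rate that appears later in this section.

```python
# How many concurrent streams can one server link sustain?
stream_rate_mbps = 1.5                  # MPEG-1-class video (see HighMPEG below)
for link_mbps in (10, 100, 622):        # Ethernet, Fast Ethernet, OC-12
    print(f"{link_mbps:4d} Mbit/s link: "
          f"{int(link_mbps / stream_rate_mbps):3d} concurrent streams at most")
```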
Delay and bandwidth constraints make effective delivery of high-quality video over wide-area networks very difficult to achieve. This situation will not be solved merely because wide-area network links are gaining ever-greater capacity through advances in fiber-optic transmission technologies. Currently in the Internet, all traffic is given the same priority, so that delay-sensitive video (or audio) packets may be delayed by delay-insensitive data traffic. Work on network quality-of-service to provide different priorities for different classes of traffic is an active area among Internet researchers but, even when such solutions are available, their availability will vary due to different adoption rates by Internet service providers. Fundamentally, any long path through the network between a client and the server it accesses is bounded by the performance limits of the least capable physical link in the path, making one slow link or one non-QoS-capable link a "killer" for delay-sensitive delivery.
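The "least capable link" observation has a simple formal expression: end-to-end throughput can never exceed the minimum link capacity along the path. A sketch, with invented hop capacities:

```python
# Achievable throughput along a path is bounded by its narrowest link.
path_link_capacities_mbps = [622, 155, 2, 100, 45]   # hypothetical hop capacities

bottleneck = min(path_link_capacities_mbps)
print(f"Path throughput bound: {bottleneck} Mbit/s")  # the single 2 Mbit/s hop wins

stream_mbps = 1.5
print("1.5 Mbit/s stream feasible:", stream_mbps <= bottleneck)
```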
The HighMPEG channel in I2-DSI[7] was developed jointly by the author and his student, Joel Dunn, with the streaming MPEG video vendor Digital Bitcasting to demonstrate an architecture for scalable replication and delivery of streaming video files. The demonstration content (commercial movie trailers supplied by ccvideo.com) contains files encoded to the MPEG standard at rates up to 1.5 Mbit/s.
HighMPEG improves delivery performance by localizing the video content near the clients, as follows. Video files published in the channel are replicated to I2-DSI servers. Local servers, known as delegate servers, are set up and registered by Internet2 participants who want to make the streaming content available to their local users. These local servers download the video files in the channel from the nearest I2-DSI server using the dynamic resolution mechanisms of I2-DSI. Users access the content with a standard URL, and the dynamic resolution mechanisms "throw" them to a local server. It is a policy decision of the local server administrators whether to allow off-site users to access their streaming video server.
The HighMPEG model allows file downloads by the local servers to be efficient. It also ensures that the resources necessary to deliver the video (both the server and the network bandwidth) are local resources, committed through an explicit decision to support the delivery locally. This principle allows for scalable content delivery, in which the resources necessary to support real-time streaming delivery are provided by the communities with an interest in the content: local resources are used to implement a global service.
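A sketch of the delegate server's download step appears below; the hostnames, file names, and resolution stub are hypothetical stand-ins for the I2-DSI mechanisms described above.

```python
import shutil
import urllib.request
from pathlib import Path

CHANNEL_FILES = ["trailer1.mpg", "trailer2.mpg"]   # the published channel contents

def nearest_dsi_server():
    """Stand-in for I2-DSI's dynamic resolution: return a nearby replica."""
    return "http://nearest.dsi.example.edu/highmpeg"   # hypothetical hostname

def refresh_delegate(local_dir="/var/video/highmpeg"):
    """Pull the channel's video files from the nearest I2-DSI server so that
    local users stream from a local machine, not across the wide area."""
    base = nearest_dsi_server()
    Path(local_dir).mkdir(parents=True, exist_ok=True)
    for name in CHANNEL_FILES:
        with urllib.request.urlopen(f"{base}/{name}") as src, \
             open(Path(local_dir) / name, "wb") as dst:
            shutil.copyfileobj(src, dst)   # one bulk download per file
```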
Digital video repository
Another content channel under I2-DSI is a repository of video files supporting researchers who are building a large searchable library of video clips for research groups working with digital video. For this multi-institutional effort, known as the OpenVideo Project[8], replication services matter chiefly as a means of providing a large pool of managed storage with a unified name space for all contributions to the OpenVideo library, together with an "intelligent download" facility for efficient access to the collection by a widely distributed community of researchers.
To provide a rich searching interface to its content, the OpenVideo community maintains a database with metadata on the encoded video files in its collection. The database and its WWW-based interface have been developed and are hosted outside of the I2-DSI system. When users find video files that they would like to download, however, the URLs for those files are given as part of the openvideo.dsi.internet2.edu WWW service. Thus, downloads of the video collections utilize the replication services of I2-DSI for efficiency.
The OpenVideo model, which applies to any file-based digital repository, demonstrates how the I2-DSI service can complement single-site services. While the searchable database of the OpenVideo project may eventually be replicated as well, under the current model the database functionality is maintained at an exterior site while the I2-DSI platforms provide large-scale, distributed storage near the network backbone for the storage-intensive video repository.
Conventional mirrors
I2-DSI also has content channels that provide an additional distribution mechanism for content that is widely mirrored in the Internet today. Conventional mirrors carried as I2-DSI channels today include the CPAN mirror for Perl and the Linux Archives. By relying on file-oriented replication services, I2-DSI readily accommodates existing tools and content from ad hoc Internet mirrors. A value-added service of the I2-DSI approach is its efficient publishing mechanism for collection (channel) developers, and experiments with the Linux Archives have validated the scalability of the underlying data replication mechanisms (Dempsey, 2000). Advanced publication services will eventually offer additional controls, such as the expiration of access based on timestamps, that go well beyond those of conventional mirrors[9] (Zhu et al., 1999). Control of the data publication mechanisms is a key aspect of the distributed server architecture of I2-DSI and similar approaches, since it can be very attractive to organizations or distributed communities interested in building up specialized digital collections through a managed publication effort.
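As one illustration of such publication control, a timestamp-based expiration check might look like the following; the metadata fields are hypothetical, not the actual I2-DSI publication interface.

```python
import time

# Hypothetical channel metadata carrying an expiration timestamp (epoch seconds).
channel_meta = {
    "name": "conference-preprints",
    "published": time.time(),
    "expires": time.time() + 30 * 24 * 3600,   # access lapses after 30 days
}

def access_allowed(meta, now=None):
    """Replica servers consult the publication metadata before serving."""
    now = time.time() if now is None else now
    return now < meta["expires"]

print(access_allowed(channel_meta))                                   # True today
print(access_allowed(channel_meta, now=channel_meta["expires"] + 1))  # False later
```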
Conclusion
Performance challenges in delivering rich digital media and large data sets are limiting the feasibility and utility of Internet delivery for many users, especially those outside of North America. This paper has provided an overview of key technology advances that are changing the performance characteristics of time-sensitive and performance-challenged data sets for users in both the commodity Internet and the high-speed Internet2. The mechanisms for intelligent staging of content in the network for localized use have enormous potential for all Internet participants, but especially for international users suffering from the delay and bandwidth constraints of long network paths between them and WWW servers. By enabling the aggregation and localized distribution of rich digital collections through managed publication interfaces, these new data replication schemes are poised to change the content experience available to commodity Internet users everywhere.
We believe that part of the lesson of the Internet has been the explosive innovation that results from new technological capabilities. Just so, the ability to aggregate large databases and utilize rich media will galvanize new projects for bandwidth-poor areas of the global network, possibly unleashing new global-scale applications that have previously been overlooked. While this may be the optimist's view, it is undeniably true that performance enhancements in Internet delivery capabilities will be one part of the puzzle, along with evolving solutions for the political, economic, and cultural dimensions of Internet diffusion, that will greatly enhance the use and usefulness of networked information technology around the world in the coming years.
Notes
1. http://www.approx.ch/freeisp.html
2. http://www.glreach.com/globstats/
3. http://wwwcache.ja.net/
4. http://www.akamai.com/
5. http://www.i-drive.com/
6. http://dsi.internet2.edu/
7. http://highmpeg.dsi.internet2.edu/
8. http://openvideo.dsi.internet2.edu/
9. http://directory.google.com/Top/Computers/Software/Internet/Site_Management/Content_Management/
References
Beck, M. and Moore, T. (1998), "The Internet2 distributed storage infrastructure project: an architecture for Internet content channels", Computer Networks and ISDN Systems, Vol. 30 No. 22-23, pp. 2141-8.
Beck, M., Moore, T., Dempsey, B. and Chawla, R. (1999a), "Portable representation of Internet content channels in I2-DSI", 4th International Web Caching Workshop, San Diego, CA.
Beck, M., Casanova, H., Dongarra, J., Moore, T., Plank, J.S., Berman, F. and Wolski, R. (1999b), "Logistical quality of service in NetSolve", Computer Communications, Vol. 22 No. 11, pp. 1034-44.
Bharat, K. and Broder, A. (1999), "Mirror, mirror on the Web: a study of host pairs with replicated content", WWW8 Conference, Toronto, Canada.
Bulashova, N. and Cole, G. (1999), "Developing community networks in Russia: the Russian civic networking program", Proceedings of INET '99, San Jose, CA.
Burkhart, G., Goodman, S., Mehta, A. and Press, L. (1998), "The Internet in India: better times ahead?", Communications of the ACM, Vol. 41 No. 11, pp. 21-6.
Danowitz, A., Nassef, Y. and Goodman, S. (1995), "Cyberspace across the Sahara: computing in North Africa", Communications of the ACM, Vol. 38 No. 12, pp. 23-8.
Dempsey, B. (2000), "Performance analysis of a scalable design for replicating file collections in wide-area networks", special issue on network-based storage services, Journal of Network and Computer Applications.
Gray, J. (1999), "When every disk is a supercomputer, then what?", keynote address at the Internet2 Network Storage Symposium (NetStore '99), available at: http://dsi.internet2.edu/netstore99/docs/presentations/keynote/NetStore-keynote-Gray-JG-BC-3-linked.html
Iyengar, A. and Challenger, J. (1997), "Improving Web server performance by caching dynamic data", Proceedings of the USENIX Symposium on Internet Technologies and Systems.
Jimba, S.W. and Atinmo, M.I. (2000), "The influence of information technology access on agricultural research in Nigeria", Internet Research: Electronic Networking Applications and Policy, Vol. 10 No. 1, pp. 63-71.
Liebowitz, B. (1998), Wireless Access Technologies Magazine Online, available at: http://www.watmag.com/magissues/Issue_3Jul98/loral/loraljuly98.html
Press, L., Burkhart, G., Foster, W., Goodman, S., Wolcott, P. and Woodard, J. (1998), "An Internet diffusion framework", Communications of the ACM, Vol. 41 No. 10, pp. 21-6.
Rodriguez, P., Spanner, C. and Biersack, E. (1999), "Web caching architectures: hierarchical and distributed caching", 4th International Web Caching Workshop (WCW 99), San Diego, CA.
Tadayoni, R. and Kristensen, T. (1999), "Universal access in broadcasting: solving the information problems of the digital age?", Proceedings of INET '99, San Jose, CA.
Watters, P., Watters, M. and Carr, S. (1998), "Evaluating Internet information services in the Asia-Pacific region", Internet Research: Electronic Networking Applications and Policy, Vol. 8 No. 3, pp. 266-71.
Zhu, H., Smith, B. and Yang, T. (1999), "Hierarchical resource management for Web server clusters with dynamic content", Proceedings of the International Conference on Measurement and Modeling of Computer Systems, pp. 198-9.