Building Next Generation Web Sites
The next-generation network (NGN) is a body of key architectural changes in telecommunication core and access networks. The general idea behind the NGN is that one network transports all information and services (voice, data, and all sorts of media such as video) by encapsulating these into IP packets, similar to those used on the Internet. NGNs are commonly built around the Internet Protocol, and therefore the term all IP is also sometimes used to describe the transformation of formerly telephone-centric networks toward NGN.
NGN is a different concept from the Future Internet, which is more focused on the evolution of the Internet in terms of the variety and interactions of the services offered.
According to ITU-T, the definition is:
- A next-generation network (NGN) is a packet-based network which can provide services including Telecommunication Services and is able to make use of multiple broadband, quality of service-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.
From a practical perspective, NGN involves three main architectural changes that need to be looked at separately:
- In the core network, NGN implies a consolidation of several (dedicated or overlay) transport networks, each historically built for a different service, into one core transport network (often based on IP and Ethernet). Among other things, it implies the migration of voice from a circuit-switched architecture (PSTN) to VoIP, as well as the migration of legacy services such as X.25 and frame relay (either commercial migration of the customer to a new service such as IP VPN, or technical migration by emulating the “legacy service” on the NGN).
- In the wired access network, NGN implies the migration from the dual setup of legacy voice alongside xDSL in local exchanges to a converged setup in which the DSLAMs integrate voice ports or VoIP, making it possible to remove the voice-switching infrastructure from the exchange.
- In the cable access network, NGN convergence implies migration of constant bit rate voice to CableLabs PacketCable standards that provide VoIP and SIP services. Both services ride over DOCSIS as the cable data layer standard.
In an NGN, there is a more defined separation between the transport (connectivity) portion of the network and the services that run on top of that transport. This means that whenever a provider wants to enable a new service, it can do so by defining it directly at the service layer without considering the transport layer – i.e. services are independent of transport details. Increasingly, applications, including voice, tend to be independent of the access network (de-layering of network and applications) and reside more on end-user devices (phone, PC, set-top box).
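To make this service/transport split concrete, here is a minimal Python sketch in which a voice service is written only against an abstract transport interface. The class and method names (Transport, VoiceService, place_call) are hypothetical illustrations, not part of any NGN standard.

```python
# Minimal sketch (hypothetical names): a service layer that is unaware of the
# transport it runs over, mirroring the NGN separation described above.
from abc import ABC, abstractmethod


class Transport(ABC):
    """Connectivity layer: only moves opaque payloads between endpoints."""

    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...


class IPTransport(Transport):
    def send(self, destination: str, payload: bytes) -> None:
        print(f"IP/MPLS core: delivering {len(payload)} bytes to {destination}")


class VoiceService:
    """Service layer: defined purely in terms of the Transport interface."""

    def __init__(self, transport: Transport) -> None:
        self.transport = transport

    def place_call(self, callee: str) -> None:
        # The service neither knows nor cares whether the transport runs over
        # xDSL, cable/DOCSIS or fibre underneath.
        self.transport.send(callee, b"<encoded voice frames>")


if __name__ == "__main__":
    VoiceService(IPTransport()).place_call("sip:alice@example.net")
```

Swapping IPTransport for any other implementation would leave the service layer untouched, which is exactly the independence described above.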
Next-generation networks are based on Internet technologies including Internet Protocol (IP) and multiprotocol label switching (MPLS). At the application level, Session Initiation Protocol (SIP) seems to be taking over from ITU-T H.323.
Initially H.323 was the most popular protocol, though its popularity decreased in the “local loop” because it originally traversed network address translation (NAT) and firewalls poorly. For this reason, as domestic VoIP services were developed, SIP was more widely adopted. However, in voice networks where everything is under the control of the network operator or telco, many of the largest carriers use H.323 as the protocol of choice in their core backbones. With the most recent changes introduced for H.323, it is now possible for H.323 devices to easily and consistently traverse NAT and firewall devices, opening up the possibility that H.323 may again be looked upon more favorably in cases where such devices previously encumbered its use. Nonetheless, most telcos are extensively researching and supporting the IP Multimedia Subsystem (IMS), which gives SIP a major chance of becoming the most widely adopted protocol.
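As a rough illustration of why SIP has been easy to adopt, the sketch below assembles a bare-bones SIP INVITE request as plain text, much like an HTTP request. This is a simplified illustration only: the addresses are placeholders, and a real user agent would also carry an SDP body and additional headers.

```python
# Simplified illustration: SIP requests are plain text, which makes them easy
# to generate, inspect and debug. Addresses and hosts below are placeholders.
import uuid


def build_invite(caller: str, callee: str, local_host: str) -> str:
    call_id = uuid.uuid4().hex
    return (
        f"INVITE sip:{callee} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_host};branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{caller}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{callee}>\r\n"
        f"Call-ID: {call_id}@{local_host}\r\n"
        f"CSeq: 1 INVITE\r\n"
        f"Contact: <sip:{caller.split('@')[0]}@{local_host}>\r\n"
        f"Content-Length: 0\r\n\r\n"
    )


print(build_invite("alice@example.net", "bob@example.org", "192.0.2.10"))
```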
For voice applications, one of the most important devices in an NGN is the Softswitch – a programmable device that controls Voice over IP (VoIP) calls. It enables the correct integration of different protocols within the NGN. The most important function of the Softswitch is creating the interface to the existing telephone network (PSTN) through signalling gateways and media gateways. However, the term Softswitch may be defined differently by different equipment manufacturers, and the device may carry somewhat different functions.
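A hedged sketch of the routing decision a Softswitch makes might look as follows: registered IP endpoints are reached natively over VoIP, while anything else is handed to a media gateway towards the PSTN. The registration table, host names and numbers are illustrative only.

```python
# Illustrative sketch of softswitch-style call routing (not any vendor's API).
class Softswitch:
    def __init__(self) -> None:
        # Stand-in registration table: endpoint name -> current IP address.
        self.registered_ip_endpoints = {"alice": "192.0.2.10"}

    def route_call(self, callee: str) -> str:
        if callee in self.registered_ip_endpoints:
            return f"VoIP leg to {self.registered_ip_endpoints[callee]} (SIP/RTP)"
        # Anything else is assumed to live on the PSTN and is reached through a
        # media gateway, with signalling translated by a signalling gateway.
        return f"PSTN leg for {callee} via media gateway mg1.example.net"


switch = Softswitch()
print(switch.route_call("alice"))
print(switch.route_call("+44 20 7946 0000"))
```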
One may quite often find the term Gatekeeper in NGN literature. This was originally a VoIP device, which converted (using gateways) voice and data from their analog or digital switched-circuit form (PSTN, SS7) to the packet-based one (IP). It controlled one or more gateways. As soon as this kind of device started using the Media Gateway Control Protocol, the name was changed to Media Gateway Controller (MGC).
A Call Agent is a general name for devices/systems controlling calls.
The IP Multimedia Subsystem (IMS) is a standardised NGN architecture for an Internet media-services capability defined by the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP).
In the UK, another popular acronym was introduced by BT (British Telecom) as 21CN (21st Century Networks, sometimes mistakenly quoted as C21N). This is another loose term for NGN and denotes BT’s initiative to deploy and operate NGN switches and networks in the period 2006–2008, the aim being for BT to have only all-IP switches in its network by 2008. The concept was abandoned, however, in favor of maintaining current-generation equipment.
The first company in the UK to roll out a NGN was THUS plc which started deployment back in 1999. THUS’ NGN contains 10,600 km of fibre optic cable with more than 190 points of presence throughout the UK. The core optical network uses dense wavelength-division multiplexing (DWDM) technology to provide scalability to many hundreds of gigabits per second of bandwidth, in line with growth demand. On top of this, the THUS backbone network uses MPLS technology to deliver the highest possible performance. IP/MPLS-based services carry voice, video and data traffic across a converged infrastructure, potentially allowing organisations to enjoy lower infrastructure costs, as well as added flexibility and functionality. Traffic can be prioritised with Classes of Service, coupled with Service Level Agreements (SLAs) that underpin quality of service performance guarantees. The THUS NGN accommodates seven Classes of Service, four of which are currently offered on MPLS IP VPN.
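As an illustration of how an application can request one of these Classes of Service, the sketch below marks outgoing UDP packets with a DiffServ code point (DSCP 46, Expedited Forwarding, commonly used for voice). Whether the marking is honoured depends entirely on the operator’s policy and SLA; the destination address and port are placeholders, and the IP_TOS socket option may not be exposed on every platform.

```python
# Illustrative sketch: request a class of service by setting the DSCP value on
# outgoing packets. The network may remark or ignore it per operator policy.
import socket

DSCP_EF = 46              # Expedited Forwarding, typically used for voice
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)  # Linux/macOS
sock.sendto(b"voice frame", ("192.0.2.50", 4000))             # placeholder peer
sock.close()
```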
In the Netherlands, KPN is developing an NGN in a network transformation program called all-IP. Next-generation networks also extend into the messaging domain; in Ireland, Openmind Networks has designed, built and deployed Traffic Control to handle the demands and requirements of all-IP networks.
In Bulgaria, BTC (Bulgarian Telecommunications Company) implemented an NGN as the underlying network of its telco services in a large-scale project in 2004. The inherent flexibility and scalability of the new core-network approach resulted in an unprecedented rise in the deployment of classical services such as POTS/ISDN, Centrex, ADSL and VPN, as well as the implementation of higher bandwidths for Metro and long-distance Ethernet/VPN services, cross-national transits and WebTV/IPTV applications.
In February 2014, Deutsche Telekom revealed that its subsidiary Makedonski Telekom had become the first European incumbent to convert its PSTN infrastructure to an all-IP network. It took just over two years for all 290,000 fixed lines to be migrated onto the new platform. The capital investment of 14 million euros made Macedonia the first country in South-East Europe whose network is fully based on the Internet Protocol.
In Canada, the startup Wind Mobile, owned by Globalive, is deploying an all-IP wireless backbone for its mobile phone service.
In mid-2005, China Telecom announced the commercial roll-out of its Next Generation Carrying Network, or CN2, using an Internet Protocol Next-Generation Network (IP NGN) architecture. Its IPv6-capable backbone network leverages softswitches (the control layer) and protocols such as DiffServ and MPLS, which boost the performance of its bearer layer. The MPLS-optimized architecture also enables Frame Relay and ATM traffic to be transported over a Layer 2 VPN, which supports both legacy traffic and new IP services over a single IP/MPLS network.
Web 2.0 (also known as participative (or participatory) web and social web) refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatibility with other products, systems, and devices) for end users.
The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O’Reilly and Dale Dougherty at the first Web 2.0 Conference in late 2004. Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web, but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web.
A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to viewing content passively. Examples of Web 2.0 features include social networking or social media sites (e.g., Facebook), blogs, wikis, folksonomies (“tagging” keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications (“apps”), collaborative consumption platforms, and mashup applications.
Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was “a collaborative medium, a place where we [could] all meet and read and write”. On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines.
Web 1.0 is a retronym referring to the first stage of the World Wide Web’s evolution, from roughly 1991 to 2004. According to Graham Cormode and Balachander Krishnamurthy, “content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content”. Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.
Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a
“move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on ‘tagging’ website content using keywords (folksonomy).”
Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 “craze”.
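The shift Flew describes, from published pages to ongoing participation, can be sketched in a few lines of Python: rather than serving a fixed HTML file, the page is regenerated on every request and folds in user-contributed comments. The in-memory list is a stand-in for a real database, and the function names are illustrative.

```python
# Minimal sketch of the Web 1.0 -> Web 2.0 shift: the page is generated on
# each request and incorporates user-contributed comments.
from html import escape

comments: list[str] = []          # stand-in for a comments table in an RDBMS


def post_comment(text: str) -> None:
    comments.append(text)


def render_article(title: str, body: str) -> str:
    comment_html = "".join(f"<li>{escape(c)}</li>" for c in comments)
    return (
        f"<h1>{escape(title)}</h1><p>{escape(body)}</p>"
        f"<h2>Comments</h2><ul>{comment_html}</ul>"
    )


post_comment("Great article!")
print(render_article("Web 2.0", "Readers can now write back."))
```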
Some common design elements of a Web 1.0 site include:
- Static pages rather than dynamic HTML.
- Content provided from the server’s filesystem rather than a relational database management system (RDBMS).
- Pages built using Server Side Includes or Common Gateway Interface (CGI) instead of a web application written in a dynamic programming language such as Perl, PHP, Python or Ruby.
- The use of HTML 3.2-era elements such as frames and tables to position and align elements on a page. These were often used in combination with spacer GIFs.
- Proprietary HTML extensions, such as the <blink> and <marquee> tags, introduced during the first browser war.
- Online guestbooks.
- GIF buttons, graphics (typically 88×31 pixels in size) promoting web browsers, operating systems, text editors and various other products.
- HTML forms sent via email. Support for server-side scripting was rare on shared servers during this period, so mailto forms were used to provide a feedback mechanism for website visitors: a user would fill in a form and, upon clicking the form’s submit button, their email client would launch and attempt to send an email containing the form’s details. The popularity and complications of the mailto protocol led browser developers to incorporate email clients into their browsers. (A sketch of the CGI-based alternative follows this list.)
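For contrast with the mailto forms described above, here is a rough sketch of the server-side alternative that became common once shared hosts allowed CGI: a small script that receives the posted form fields. The field names (“name”, “message”) and what happens after receipt are illustrative assumptions.

```python
#!/usr/bin/env python3
# Rough sketch of a CGI feedback handler a Web 1.0 host might run instead of
# relying on a mailto form. Field names are illustrative.
import os
import sys
from urllib.parse import parse_qs

length = int(os.environ.get("CONTENT_LENGTH") or 0)
fields = parse_qs(sys.stdin.read(length)) if length else {}

name = fields.get("name", ["anonymous"])[0]
message = fields.get("message", [""])[0]

# A real script might append to a guestbook file or send mail here.
sys.stdout.write("Content-Type: text/html\r\n\r\n")
sys.stdout.write(f"<p>Thanks, {name}! Your feedback was received.</p>")
```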
The term “Web 2.0” was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article “Fragmented Future”:
The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will […] appear on your computer screen, […] on your TV set […] your car dashboard […] your cell phone […] hand-held game machines […] maybe even your microwave oven.
Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web “fragmenting” into a future that extended well beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyperlinking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her “2.0” designation refers to a next version of the Web and does not directly relate to the term’s current use.
The term Web 2.0 did not resurface until 2002. Kinsley and Eric focused on the concepts now associated with the term where, as Scott Dietzen put it, “the Web becomes a universal, standards-based integration platform”. In 2004, the term began to gain popularity when O’Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O’Reilly outlined their definition of the “Web as Platform”, where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that “customers are building your business for you”. They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be “harnessed” to create value. O’Reilly and Battelle contrasted Web 2.0 with what they called “Web 1.0”, associating the latter with the business models of Netscape and the Encyclopædia Britannica Online. For example,
“Netscape framed ‘the web as platform’ in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the ‘horseless carriage’ framed the automobile as an extension of the familiar, Netscape promoted a ‘webtop’ to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.”
In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to end users. O’Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its “PageRank” algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called “the perpetual beta”. A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects they edit. Wikipedia is based not on subject-matter expertise but on an adaptation of the open-source software adage “given enough eyeballs, all bugs are shallow”: if enough users are able to look at a software product’s code (or a website), they will be able to fix any “bugs” or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.
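To make the link-based “reputation” idea behind PageRank concrete, the toy power-iteration sketch below computes ranks for a three-page graph. The graph, damping factor and iteration count are illustrative; Google’s production algorithm is far more elaborate.

```python
# Toy power-iteration PageRank over a tiny link graph (illustrative only).
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                 # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:                            # split rank across outgoing links
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank


graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))
```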
The popularity of Web 2.0 was acknowledged when TIME magazine named “You” its 2006 Person of the Year; that is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media-sharing sites.
In the cover story, Lev Grossman explains:
“It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes.”
Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site’s content by commenting on published articles or creating a user account or profile on the site, which may enable increased participation. By placing greater emphasis on these already-extant capabilities, Web 2.0 sites encourage users to rely more on their browser for the user interface, application software (“apps”) and file storage facilities. This has been called “network as platform” computing. Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress’s easy-to-use blog and website creation tools), “tagging” (which enables users to label websites, videos or photos in some fashion), “like” buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.
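A small sketch of the “tagging” and “like” features mentioned above: a folksonomy simply accumulates whatever labels users attach, with no fixed taxonomy, and a like is just a counter per item. The data structures and URLs are illustrative stand-ins for a real backing store.

```python
# Illustrative folksonomy and like-counter, standing in for a real database.
from collections import defaultdict

tags: dict[str, set[str]] = defaultdict(set)    # item URL -> user-chosen tags
likes: dict[str, int] = defaultdict(int)        # item URL -> like count


def tag_item(url: str, tag: str) -> None:
    tags[url].add(tag.lower())                  # no controlled vocabulary


def like_item(url: str) -> None:
    likes[url] += 1


tag_item("https://example.org/photo/42", "sunset")
tag_item("https://example.org/photo/42", "Beach")
like_item("https://example.org/photo/42")
print(tags["https://example.org/photo/42"], likes["https://example.org/photo/42"])
```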
Users can provide the data and exercise some control over what they share on a Web 2.0 site. These sites may have an “architecture of participation” that encourages users to add value to the application as they use it. Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects. Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.