Tuesday, September 13, 2011

Promoters

    * Broadcom Corporation
    * Dell, Inc.
    * Intel Corporation
    * LG Electronics Inc.
    * Panasonic Corporation
    * Philips Electronics
    * NEC Corporation
    * Samsung Electronics, Co., Ltd
    * SiBEAM, Inc.
    * Sony Corporation
    * Toshiba Corporation

Wireless HD

WirelessHD is an industry-led effort to define a specification for the next-generation wireless digital network interface for wireless high-definition signal transmission in consumer electronics products. The consortium currently has over 40 adopters; key members behind the specification include Broadcom, Intel, LG, Panasonic, NEC, Samsung, SiBEAM, Sony, Philips and Toshiba. The founders intend the technology to be used for consumer electronics devices, PCs, and portable devices alike.

The specification was finalized in January 2008.

Technology

The WirelessHD specification is based on a 7 GHz channel in the 60 GHz Extremely High Frequency radio band. It allows either compressed (H.264) or uncompressed digital transmission of high-definition video, audio, and data signals, essentially making it a wireless equivalent of HDMI. First-generation implementations achieve data rates of around 4 Gbit/s, but the core technology allows theoretical data rates as high as 25 Gbit/s (compared to 10.2 Gbit/s for HDMI 1.3 and 21.6 Gbit/s for DisplayPort 1.2), permitting WirelessHD to scale to higher resolutions, color depths, and ranges.

The 60 GHz band usually requires line of sight between transmitter and receiver, and the WirelessHD specification mitigates this limitation through the use of beam forming at the receiver and transmitter antennas to increase the signal's effective radiated power. The goal range for the first products is in-room, point-to-point, non-line-of-sight (NLOS) operation at up to 10 meters. The atmospheric absorption of 60 GHz energy by oxygen molecules limits undesired propagation over long distances and helps control intersystem interference and long-distance reception, which is a concern to video copyright owners.[2]

The WirelessHD specification has provisions for content encryption via Digital Transmission Content Protection (DTCP) as well as provisions for network management. A standard remote control allows users to control the WirelessHD devices and choose which device will act as the source for the display.

Personal Computer

PCs with a DVI interface are capable of video output to an HDMI-enabled monitor. Some PCs include an HDMI interface and may also be capable of HDMI audio output, depending on the specific hardware. For example, Intel's motherboard chipsets since the 945G, as well as NVIDIA's GeForce 8200/8300 motherboard chipsets, have been capable of 8-channel LPCM output over HDMI. Eight-channel LPCM audio output over HDMI with a video card was first seen on the ATI Radeon HD 4850, which was released in June 2008 and is supported by other video cards in the ATI Radeon HD 4000 series. Linux can support 8-channel LPCM audio over HDMI if the video card has the necessary hardware and supports the Advanced Linux Sound Architecture (ALSA); the ATI Radeon HD 4000 series supports ALSA. Cyberlink announced in June 2008 that it would update its PowerDVD playback software to support 192 kHz/24-bit Blu-ray Disc audio decoding in Q3-Q4 of 2008. Corel's WinDVD 9 Plus currently supports 96 kHz/24-bit Blu-ray Disc audio decoding.

Even with an HDMI output, a computer may not support HDCP, Microsoft's Protected Video Path, or Microsoft's Protected Audio Path. In the case of HDCP, several early graphics cards were labelled as "HDCP-enabled" but did not actually have the necessary hardware for HDCP.[142] These included certain graphics cards based on the ATI X1600 chipset and certain models of the NVIDIA GeForce 7900 series. The first computer monitors with HDCP support were released in 2005, and by February 2006 a dozen different models had been released. The Protected Video Path was enabled in graphics cards that supported HDCP, since it was required for output of Blu-ray Disc video. In comparison, the Protected Audio Path was required only if a lossless audio bitstream (such as Dolby TrueHD or DTS-HD MA) was output. Uncompressed LPCM audio, however, does not require a Protected Audio Path, and software programs such as PowerDVD and WinDVD can decode Dolby TrueHD and DTS-HD MA and output them as LPCM. A limitation is that if the computer does not support a Protected Audio Path, the audio must be downsampled to 16-bit 48 kHz, though it can still be output at up to 8 channels. No graphics cards released in 2008 supported the Protected Audio Path.

In June 2008, Asus announced the Xonar HDAV1.3, which in December 2008 received a software update that made it the first HDMI sound card to support the Protected Audio Path and to both bitstream and decode lossless audio (Dolby TrueHD and DTS-HD MA), although bitstreaming is only available when using the ArcSoft TotalMedia Theatre software. The Xonar HDAV1.3 has an HDMI 1.3 input/output, and Asus says that it can work with most video cards on the market.

Blu-ray Disc and HD DVD Players

Blu-ray Disc and HD DVD, introduced in 2006, offer new high-fidelity audio features that require HDMI for best results. HDMI 1.3 can transport Dolby Digital Plus, Dolby TrueHD, and DTS-HD Master Audio bitstreams in compressed form.[45] This capability allows an AV receiver with the necessary decoder to decode the compressed audio stream. The Blu-ray specification does not include video encoded with either Deep Color or xvYCC, so even HDMI 1.0 can transfer Blu-ray Discs at full video quality.

Blu-ray permits secondary audio decoding, whereby the disc content can tell the player to mix multiple audio sources together before final output.[128] Some Blu-ray and HD DVD players can decode all of the audio codecs internally and can output LPCM audio over HDMI. Multichannel LPCM can be transported over an HDMI connection, and as long as the AV receiver supports multichannel LPCM audio over HDMI and supports HDCP, the audio reproduction is equal in resolution to HDMI 1.3 bitstream output. Some low-cost AV receivers, such as the Onkyo TX-SR506, do not support audio processing over HDMI and are labelled as "HDMI pass through" devices.

Tablets

Some tablets, such as the Motorola Xoom, BlackBerry PlayBook and Acer Iconia Tab A500, support HDMI using Micro-HDMI (Type D) ports. Others, such as the ASUS Eee Pad Transformer, support the standard using Mini-HDMI (Type C) ports. The iPad and iPad 2 have a special A/V adapter that converts Apple's data line to a standard HDMI (Type A) port. Samsung has a similar proprietary thirty-pin port for its Galaxy Tab 10.1 that can adapt to HDMI as well as USB drives. The Dell Streak 5 smartphone/tablet hybrid is capable of outputting over HDMI; while the Streak uses a PDMI port, a separate cradle is available which adds HDMI compatibility.

DVI Port

HDMI is backward-compatible with single-link Digital Visual Interface digital video (DVI-D or DVI-I, but not DVI-A). No signal conversion is required when an adapter or asymmetric cable is used, and consequently no loss in video quality occurs.

From a user's perspective, an HDMI display can be driven by a single-link DVI-D source, since HDMI and DVI-D define an overlapping minimum set of supported resolutions and framebuffer formats to ensure a basic level of interoperability. In the reverse scenario, because DVI-D displays are not required to support High-bandwidth Digital Content Protection, a DVI-D monitor is not guaranteed to display a signal from an HDMI source. A typical HDMI source (such as a Blu-ray player) may demand HDCP compliance of the display, and hence refuse to output HDCP-protected content to a non-compliant display.[93] All HDMI devices must support sRGB encoding.[94] Absent this HDCP issue, an HDMI source and a DVI-D display would enjoy the same level of basic interoperability. Further complicating the issue is the existence of a handful of display devices (high-end home theater projectors) that were designed with HDMI inputs but are not HDCP-compliant.

Features specific to HDMI, such as remote-control and audio transport, are not available in devices that use legacy DVI-D signalling. However, many devices output HDMI over a DVI connector (e.g., ATI 3000-series and NVIDIA GTX 200-series video cards),[5] and some multimedia displays may accept HDMI (including audio) over a DVI input. In general, exact capabilities vary from product to product.

HDMI

High-Definition Multimedia Interface (HDMI) is a compact audio/video interface for transmitting uncompressed digital data. It is a digital alternative to consumer analog standards, such as radio frequency (RF) coaxial cable, composite video, S-Video, SCART, component video, D-Terminal, or VGA. HDMI connects digital audio/video sources (such as set-top boxes, DVD players, HD DVD players, Blu-ray Disc players, AVCHD camcorders, personal computers (PCs), video game consoles such as the PlayStation 3 and Xbox 360, and AV receivers) to compatible digital audio devices, computer monitors, video projectors, tablet computers, and digital televisions.
HDMI implements the EIA/CEA-861 standards, which define video formats and waveforms, transport of compressed, uncompressed, and LPCM audio, auxiliary data, and implementations of the VESA EDID. HDMI supports, on a single cable, any uncompressed TV or PC video format, including standard, enhanced, high definition and 3D video signals; up to 8 channels of compressed or uncompressed digital audio; a Consumer Electronics Control (CEC) connection; and an Ethernet data connection.

The CEC allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one remote control handset. Because HDMI is electrically compatible with the CEA-861 signals used by Digital Visual Interface (DVI), no signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. As an uncompressed CEA-861 connection, HDMI is independent of the various digital television standards used by individual devices, such as ATSC and DVB, as these are encapsulations of compressed MPEG video streams (which can be decoded and output as an uncompressed video stream on HDMI).

Production of consumer HDMI products started in late 2003. Over 850 consumer electronics and PC companies have adopted the HDMI specification (HDMI Adopters). In Europe, either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTV camcorders and digital still cameras in 2006. Shipments of HDMI devices were expected to exceed those of DVI in 2008, driven primarily by the consumer electronics market.
[Image: the HDMI logo, with "HDMI" in large type and "High-Definition Multimedia Interface" spelled out beneath it, alongside a photo of an HDMI connector.]

Networking cables

Networking cables are used to connect one network device to another, or to connect two or more computers to share resources such as printers and scanners. Different types of network cable, such as coaxial cable, optical fiber cable, and twisted-pair cable, are used depending on the network's topology, protocol, and size. The devices can be separated by a few meters (e.g. via Ethernet) or nearly unlimited distances (e.g. via the interconnections of the Internet).

While wireless may be the wave of the future, most computer networks today still utilize cables to transfer signals from one point to another.
Twisted pair
Main article: Twisted pair

Twisted pair cabling is a form of wiring in which two conductors (the forward and return conductors of a single circuit) are twisted together for the purpose of canceling out electromagnetic interference (EMI) from external sources, for instance electromagnetic radiation from unshielded twisted pair (UTP) cables, and crosstalk between neighboring pairs. Twisted pair cable is classified into two types: unshielded twisted pair (UTP) and shielded twisted pair (STP). This type of cable is used for home and corporate networks and utilises RJ45 connector ends.
 Optical fiber cable
Main article: Optical fiber cable

An optical fiber cable is a cable containing one or more optical fibers. The optical fiber elements are typically individually coated with plastic layers and contained in a protective tube suitable for the environment where the cable will be deployed.
Coaxial cable
Main article: coaxial cable

Coaxial lines confine the electromagnetic wave to the area inside the cable, between the center conductor and the shield. The transmission of energy in the line occurs totally through the dielectric inside the cable between the conductors. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them.

The most common use for coaxial cables is for television and other signals with bandwidth of multiple megahertz. Although in most homes coaxial cables have been installed for transmission of TV signals, new technologies (such as the ITU-T G.hn standard) open the possibility of using home coaxial cable for high-speed home networking applications (Ethernet over coax).

In the 20th century they carried long distance telephone connections.
 Patch cable


A patch cable is an electrical or optical cable used to connect one electronic or optical device to another for signal routing. Devices of different types (e.g., a switch connected to a computer, or a switch connected to a router) are connected with patch cables. Patch cords are usually produced in many different colors so as to be easily distinguishable,[2] and are relatively short, perhaps no longer than two metres.
Ethernet crossover cable
Main article: Ethernet crossover cable

An Ethernet crossover cable is a type of Ethernet cable used to connect computing devices together directly where they would normally be connected via a network switch, hub or router, such as directly connecting two personal computers via their network adapters.
 Power lines

Although power wires are not designed for networking applications, new technologies like power line communication allow these wires to also be used to interconnect home computers, peripherals and other networked consumer products. In December 2008, the ITU-T adopted Recommendation G.hn/G.9960 as the first worldwide standard for high-speed powerline communications.[3] G.hn also specifies communications over phone lines and coaxial wiring.

List of computer networking devices

Common basic networking devices:

    * Router: a specialized network device that determines the next network point to which it can forward a data packet towards the destination of the packet. Unlike a gateway, it cannot interface different protocols. Works on OSI layer 3.
    * Bridge: a device that connects multiple network segments along the data link layer. Works on OSI layer 2.
    * Switch: a device that allocates traffic from one network segment to certain lines (intended destination(s)) which connect the segment to another network segment. So unlike a hub, a switch splits the network traffic and sends it to different destinations rather than to all systems on the network. Works on OSI layer 2.
    * Hub: connects multiple Ethernet segments together, making them act as a single segment. A hub provides bandwidth that is shared among all the attached devices, in contrast to switches, which provide a dedicated connection between individual nodes. Works on OSI layer 1.
    * Repeater: device to amplify or regenerate digital signals received while sending them from one part of a network into another. Works on OSI layer 1.

Some hybrid network devices:

    * Multilayer Switch: a switch which, in addition to switching on OSI layer 2, provides functionality at higher protocol layers.
    * Protocol Converter: a hardware device that converts between two different types of transmissions, such as asynchronous and synchronous transmissions.
    * Bridge Router (brouter): combines router and bridge functionality and therefore works on OSI layers 2 and 3.

Hardware or software components that typically sit on the connection point of different networks, e.g. between an internal network and an external network:

    * Proxy: computer network service which allows clients to make indirect network connections to other network services
    * Firewall: a piece of hardware or software put on the network to prevent some communications forbidden by the network policy
    * Network Address Translator: network service, provided as hardware or software, that converts internal to external network addresses and vice versa

Other hardware for establishing networks or dial-up connections:

    * Multiplexer: device that combines several electrical signals into a single signal
    * Network Card: a piece of computer hardware that allows the attached computer to communicate over a network
    * Modem: device that modulates an analog "carrier" signal (such as sound) to encode digital information, and demodulates such a carrier signal to decode the transmitted information, as when a computer communicates with another computer over the telephone network
    * ISDN terminal adapter (TA): a specialized gateway for ISDN
    * Line Driver: a device to increase transmission distance by amplifying the signal; used in base-band networks only

Sunday, September 11, 2011

computer names and IP addresses

So every interface on every node has an IP address. It was realized quite quickly that humans are pretty bad at remembering numbers, so it was decided (just like phone numbers) to have a directory of names. But since we're using computers anyway, it's nicer to have the computer look up the names for us automatically.

Hence we have the Domain Name System (DNS). There are nodes with well known IP addresses which programs can ask to look up names, and return IP addresses. Almost all programs you will use are capable of doing this, which is why you can put `www.linuxcare.com' into Netscape, instead of `167.216.245.249'.

Of course, you need the IP address of at least one of these `name servers': usually these are kept in the `/etc/resolv.conf' file.

Since DNS queries and responses are fairly small (1 packet each), the TCP protocol is not usually used: it provides automatic retransmission, ordering and general reliability, but at a cost of sending extra packets through the network. Instead we use the very simple `User Datagram Protocol', which doesn't offer any of the fancy TCP features we don't need.
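
To see this name-to-address lookup in action, here is a minimal Python sketch using the standard library's resolver interface (which consults `/etc/resolv.conf' on Linux); the hostname is only an example and the addresses returned will depend on your own name servers.

  import socket

  # Ask the system resolver to turn a name into IP addresses,
  # just like Netscape does before it can open a connection.
  hostname = "www.example.com"  # example name; substitute any host you like

  try:
      # gethostbyname_ex returns (canonical name, aliases, list of IPv4 addresses)
      canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
      print(f"{hostname} resolves to: {', '.join(addresses)}")
  except socket.gaierror as err:
      print(f"lookup failed: {err}")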

Network Masks

There is one last detail: there is a standard notation for groups of IP addresses, sometimes called a `network address'. Just like a phone number can be broken up into an area prefix and the rest, we can divide an IP address into a network prefix and the rest.
It used to be that people would talk about `the 1.2.3 network', meaning all 256 addresses from 1.2.3.0 to 1.2.3.255. Or if that wasn't a big enough network, they might talk about the `1.2 network' which meant all addresses from 1.2.0.0 to 1.2.255.255.

We usually don't write `1.2.0.0 - 1.2.255.255'. Instead, we shorten it to `1.2.0.0/16'. This weird `/16' notation (it's called a `netmask') requires a little explanation.
Each number between the dots in an IP address is actually 8 binary digits (00000000 to 11111111): we write them in decimal form to make it more readable for humans. The `/16' means that the first 16 binary digits are the network address, in other words, the `1.2.' part is the network (remember: each number between the dots represents 8 binary digits). This means any IP address beginning with `1.2.' is part of the network: `1.2.3.4' and `1.2.3.50' are, and `1.3.1.1' is not.
To make life easier, we usually use networks ending in `/8', `/16' and `/24'. For example, `10.0.0.0/8' is a big network containing any address from 10.0.0.0 to 10.255.255.255 (over 16 million addresses!). 10.0.0.0/16 is smaller, containing only IP addresses from 10.0.0.0 to 10.0.255.255. 10.0.0.0/24 is smaller still, containing addresses 10.0.0.0 to 10.0.0.255.
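
As a quick illustration of this notation, here is a small Python sketch using the standard `ipaddress' module; the addresses are the same example ones used above.

  import ipaddress

  # The `1.2.0.0/16' network from the text: the first 16 bits are the network part.
  net = ipaddress.ip_network("1.2.0.0/16")

  for addr in ("1.2.3.4", "1.2.3.50", "1.3.1.1"):
      inside = ipaddress.ip_address(addr) in net
      print(f"{addr} is {'inside' if inside else 'NOT inside'} {net}")

  # The 10.x examples: /8, /16 and /24 get progressively smaller.
  for prefix in (8, 16, 24):
      n = ipaddress.ip_network(f"10.0.0.0/{prefix}")
      print(f"{n} runs from {n.network_address} to {n.broadcast_address}")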
To make things confusing, there is another way of writing netmasks. We can write them like IP addresses:

10.0.0.0/255.0.0.0

Finally, it's worth noting that the very highest IP address in any network is reserved as the `broadcast address', which can be used to send a message to everyone on the network at once.
Here is a table of network masks:
Short   Full                    Maximum         Comment
Form    Form                    #Machines

/8      /255.0.0.0              16,777,215      Used to be called an `A-class'
/16     /255.255.0.0            65,535          Used to be called a `B-class'
/17     /255.255.128.0          32,767
/18     /255.255.192.0          16,383
/19     /255.255.224.0          8,191
/20     /255.255.240.0          4,095
/21     /255.255.248.0          2,047
/22     /255.255.252.0          1,023
/23     /255.255.254.0          511
/24     /255.255.255.0          255             Used to be called a `C-class'
/25     /255.255.255.128        127
/26     /255.255.255.192        63
/27     /255.255.255.224        31
/28     /255.255.255.240        15
/29     /255.255.255.248        7
/30     /255.255.255.252        3
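
The full forms and machine counts in the table above can be reproduced with a few lines of Python; this sketch uses the standard `ipaddress' module and follows the table's own convention of counting every address above the network address itself (one less than the block size).

  import ipaddress

  print(f"{'Short':<8}{'Full form':<20}{'Max #machines':>14}")
  for prefix in [8, 16] + list(range(17, 31)):
      net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
      full = str(net.netmask)              # e.g. /24 -> 255.255.255.0
      machines = net.num_addresses - 1     # the table counts 2^host_bits - 1
      print(f"/{prefix:<7}{'/' + full:<20}{machines:>14,}")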

IP Things

So the role of the IP layer is to figure out how to `route' packets to their final destination. To make this possible, every interface on the network needs an `IP address'. An IP address consists of four numbers separated by periods, like `167.216.245.249'. Each number is between zero and 255.
Interfaces in the same network tend to have neighboring IP addresses. For example, `167.216.245.250' sits right next to the machine with the IP address `167.216.245.249'. Remember also that a router is a node with interfaces on more than one network, so the router will have one IP address for each interface.
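
Under the hood each dotted address is just a 32-bit number, which is why neighbouring machines have such similar-looking addresses; the tiny Python sketch below (standard library only) makes the four 8-bit groups visible, using the example addresses from the text.

  import ipaddress

  for dotted in ("167.216.245.249", "167.216.245.250"):
      addr = ipaddress.ip_address(dotted)
      # int() gives the underlying 32-bit value; the binary form shows the four octets
      print(f"{dotted:>17} = {int(addr):>10} = {int(addr):032b}")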

So the Linux kernel's IP layer keeps a table of different `routes', describing how to get to various groups of IP addresses. The simplest of these is called a `default route': if the IP layer doesn't know better, this is where it will send a packet. You can see a list of routes using `/sbin/route'.
Routes can either point to a link, or a particular node which is connected to another network. For example, when you dial up to the ISP, your default route will point to the modem link, because that's where the entire world is.

  Rusty's              ISP's  ~~~~~~~~~~~~ 
   Modem               Modem {            }
       o------------------o { The Internet }
                             {            }
                              ~~~~~~~~~~~~  
But if you have a permanent machine on your network which connects to the outside world, it's a bit more complicated. In the diagram below, my machine can talk directly to Tridge and Paul's machines, and to the firewall, but it needs to know that packets heading the rest of the world need to go to the firewall, which will pass them on. This means that you have two routes: one which says `if it's on my network, just send it straight there' and then a default route which says `otherwise, send it to the firewall'.

                         o  Tridge's
                         |    Work Machine      ~~~~~~~~~~~~
  Rusty's                |                     {            } 
   Work Machine o--------+-----------------o--{ The Internet }
                         |            Firewall {            } 
                         |                      ~~~~~~~~~~~~
                         o  Paul's
                              Work Machine
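
The two-route setup in the diagram (one route for the local network plus a default route pointing at the firewall) can be mimicked in a few lines of Python; this is only a toy model, and the addresses below are invented for illustration rather than taken from the text.

  import ipaddress

  # A toy routing table, most specific route first: the local LAN,
  # then a catch-all default route (0.0.0.0/0) pointing at the firewall.
  routes = [
      (ipaddress.ip_network("192.168.1.0/24"), "deliver directly on the local network"),
      (ipaddress.ip_network("0.0.0.0/0"),      "send to the firewall"),
  ]

  def next_hop(destination):
      dest = ipaddress.ip_address(destination)
      # Take the first (most specific) matching route, as the kernel would here.
      for network, action in routes:
          if dest in network:
              return action
      raise ValueError("no route to host")

  print(next_hop("192.168.1.7"))    # a machine on our own network
  print(next_hop("203.0.113.99"))   # anything else goes via the default route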

Internetworking

The question then arises: how come every node on the Internet can talk to the others, if they all use different link-level protocols to talk to each other?

The answer is fairly simple: we need another protocol which controls how stuff flows through the network. The link-level protocol describes how to get from one node to another if they're connected directly: the `network protocol' tells us how to get from one point in the network to any other, going through other links if necessary.

For the Internet, the network protocol is the Internet Protocol (version 4), or `IP'. It's not the only protocol out there (Apple's AppleTalk, Novell's IPX, Digital's DECNet and Microsoft's NetBEUI being others) but it's the most widely adopted. There's a newer version of IP called IPv6, but it's still not common.

So to send a message from one side of the globe to another, your computer writes a bit of Internet Protocol, sends it to your modem, which uses some modem link-level protocol to send it to the modem it's dialed up to, which is probably plugged into a terminal server (basically a big box of modems), which sends it to a node inside the ISP's network, which sends it out usually to a bigger node, which sends it to the next node... and so on. A node which connects two or more networks is called a `router': it will have one interface for each network.

We call this array of protocols a `protocol stack', usually drawn like so:

 [ Application: Handles Porn ]           [ Application Layer: Serves Porn ]
              |                                          ^
              v                                          |
[ TCP: Handles Retransmission ]          [ TCP: Handles Retransmission ]
              |                                          ^
              v                                          |
    [ IP: Handles Routing ]                   [ IP: Handles Routing ]
              |                                          ^
              v                                          |
[ Link: Handles A Single Hop ]           [ Link: Handles A Single Hop ]
              |                                          |
              +------------------------------------------+
So in the diagram, we see Netscape (the Application on top left) retrieving a web page from a web server (the Application on top right). To do this it will use the `Transmission Control Protocol' or `TCP': over 90% of Internet traffic today is TCP, as it is used for the Web and email.
So Netscape makes the request for a TCP connection to the remote web server: this is handed to the TCP layer, which hands it to the IP layer, which figures out which direction it has to go in, hands it onto the appropriate link layer, which transmits it to the other end of the link.
At the other end, the link layer hands it up to the IP layer, which sees it is destined for this host (if not, it might hand it down to a different link layer to go out to the next node), hands it up to the TCP layer, which hands it to the server.
So we have the following breakdown:

  1. The application (Netscape, or the web server at the other end) decides who it wants to talk to, and what it wants to send (see the sketch after this list).
  2. The TCP layer sends special packets to start the conversation with the other end, and then packs the data into a TCP `packet': a packet is just a term for a chunk of data which passes through a network. The TCP layer hands this packet to the IP layer: it then keeps sending it to the IP layer until the TCP layer at the other end replies to say that it has received it. This is called `retransmission', and has a whole heap of complex rules which control when to retransmit, how long to wait, etc. It also gives each packet a set of numbers, which mean that the other end can sort them into the right order.
  3. The IP layer looks at the destination of the packet, and figures out the next node to send the packet to. This simple act is called `routing', and ranges from really simple (if you only have one modem, and no other network interfaces, all packets should go out that interface) to extremely complex (if you have 15 major networks connected directly to you).
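
Here is a minimal Python sketch of steps 1 and 2 from the application's point of view: it opens a TCP connection and sends a simple HTTP request, leaving retransmission, ordering and routing to the TCP and IP layers underneath. The host name is only an example.

  import socket

  host = "www.example.com"  # example server; any reachable web server will do

  # Step 1: the application decides who to talk to and what to send.
  request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()

  # Step 2: ask the TCP layer for a connection; it handles retransmission
  # and ordering, while the IP layer underneath handles the routing.
  with socket.create_connection((host, 80), timeout=10) as conn:
      conn.sendall(request)
      reply = conn.recv(4096)

  print(reply.decode(errors="replace").split("\r\n", 1)[0])  # e.g. "HTTP/1.0 200 OK"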

Internet

The Internet is a WAN which spans the entire globe: it is the largest computer network in existence. The phrase `internetworking' refers to connecting separate networks to build a larger one, hence `The Internet' is the connection of a whole pile of subnetworks.
So now we look at the list above and ask ourselves: what is the Internet's size, physical details and protocols?
The size is already established above: it's global.
The physical details are varied however: each little sub-network is connected differently, with a different layout and physical nature. Attempts to map it in a useful way have generally met with abject failure.
The protocols spoken by each link are also often different: all of the link-level protocols listed above are used, and many more.

OSI reference Model

  • 1. INTRODUCTION
    • 1.1 History of the Concept
  • 2. THE OSI REFERENCE MODEL
    • 2.1 OSI Layer 1: The Physical Layer
    • 2.2 OSI Layer 2: The Data Link Layer
    • 2.3 OSI Layer 3: The Network Layer
    • 2.4 OSI Layer 4: The Transport Layer
    • 2.5 OSI Layer 5: The Session Layer
    • 2.6 OSI Layers 6 & 7: The Presentation & Application Layers
    • 2.7 Summary of the OSI Concept
  • 3. COMMON NETWORK HARDWARE & INFRASTRUCTURE STANDARDS
    • 3.1 Ethernet (IEEE 802.3)
    • 3.2 Wi-Fi (IEEE 802.11x)
    • 3.3 Bluetooth (IEEE 802.15.1)
  • 4. THE INTERNET PROTOCOL SUITE: A LESSON IN PROTOCOL STACKS
    • 4.1 Introduction to the Internet Protocol Suite
    • 4.2 The Internet Protocol Suite Link Layer
    • 4.3 The Internet Protocol Suite Internetwork Layer
    • 4.4 The Internet Protocol Suite Transport Layer
      • TCP
      • UDP
      • RTP
    • 4.5 The Internet Protocol Suite Application Layer
      • The Uniform Resource Locator Concept
      • HTTP
      • HTTPS
      • FTP
      • SSH
    • 4.6 Summary of the Internet Protocol Suite

Basic hardware components

Apart from the physical communications media themselves as described above, networks comprise additional basic hardware building blocks interconnecting their terminals, such as network interface cards (NICs), hubs, bridges, switches, and routers.

Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to physically access a networking medium. It provides a low-level addressing system through the use of MAC addresses.
Each Ethernet network interface has a unique MAC address which is usually stored in a small memory device on the card, allowing any device to connect to the network without creating an address conflict. Ethernet MAC addresses are composed of six octets. Uniqueness is maintained by the IEEE, which manages the Ethernet address space by assigning 3-octet prefixes to equipment manufacturers. The list of prefixes is publicly available. Each manufacturer is then obliged to both use only their assigned prefix(es) and to uniquely set the 3-octet suffix of every Ethernet interface they produce.
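
The split between the IEEE-assigned 3-octet prefix and the manufacturer-chosen 3-octet suffix is easy to see programmatically; the short Python sketch below simply slices a MAC address, and the address itself is invented for illustration.

  # A MAC address is six octets, usually written as colon-separated hex pairs.
  mac = "00:1a:2b:3c:4d:5e"  # invented example address

  octets = mac.split(":")
  oui    = ":".join(octets[:3])   # first three octets: IEEE-assigned manufacturer prefix
  suffix = ":".join(octets[3:])   # last three octets: set uniquely by the manufacturer

  print(f"Manufacturer prefix (OUI): {oui}")
  print(f"Interface-specific suffix: {suffix}")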

Repeaters and hubs

A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. A repeater with multiple ports is known as a hub. Repeaters work on the Physical Layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay which can affect network communication when there are several repeaters in a row. Many network architectures limit the number of repeaters that can be used in a row (e.g. Ethernet's 5-4-3 rule).
Today, repeaters and hubs have been made mostly obsolete by switches (see below).

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges broadcast to all ports except the port on which the broadcast was received. However, bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address to that port only.
A bridge learns the association of ports and addresses by examining the source addresses of the frames it sees on its ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
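
The learning behaviour just described can be sketched as a small simulation; the Python below is a toy model rather than real bridge firmware, and the port numbers and addresses are made up.

  class LearningBridge:
      """Toy model of a layer-2 learning bridge."""

      def __init__(self, ports):
          self.ports = set(ports)
          self.mac_table = {}          # learned MAC address -> port

      def handle_frame(self, in_port, src_mac, dst_mac):
          # Learn: the source address was just seen on in_port.
          self.mac_table[src_mac] = in_port
          if dst_mac in self.mac_table:
              return {self.mac_table[dst_mac]}     # forward to the known port only
          return self.ports - {in_port}            # unknown destination: flood

  bridge = LearningBridge(ports=[1, 2, 3, 4])
  print(bridge.handle_frame(1, "aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02"))  # flooded to 2, 3, 4
  print(bridge.handle_frame(2, "bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01"))  # now sent to port 1 only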
Bridges come in three basic types:
  • Local bridges: Directly connect LANs
  • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
  • Wireless bridges: Can be used to join LANs or connect remote stations to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets.[15] A switch is distinct from a hub in that it only forwards the frames to the ports involved in the communication rather than all ports connected. A switch breaks the collision domain but represents itself as a broadcast domain. Switches make forwarding decisions of frames on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for devices, and cascading additional switches.[16] Some switches are capable of routing based on Layer 3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier).

Routers

A router is an internetworking device that forwards packets between networks by processing the information found in the datagram or packet (Internet Protocol information from layer 3 of the OSI model). In many situations, this information is processed in conjunction with the routing table (also known as the forwarding table). Routers use routing tables to determine which interface to forward packets to (this can include the "null", also known as the "black hole", interface, because data can go into it but no further processing is done on that data).

Firewalls

A firewall is an important aspect of a network with respect to security. It typically rejects access requests from unsafe sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in 'cyber' attacks for the purpose of stealing/corrupting data, planting viruses, etc.

Topologies

A network topology is the layout of the interconnections of the nodes of a computer network. Common layouts are:
  • A bus network: all nodes are connected to a common medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
  • A star network: all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central wireless access point.
  • A ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
  • A mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
  • A fully connected network: each node is connected to every other node in the network.
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is a star, because all neighboring connections are routed via a central physical location.

communication media

Computer networks can be classified according to the hardware and associated software technology that is used to interconnect the individual devices in the network, such as electrical cable (HomePNA, power line communication, G.hn), optical fiber, and radio waves (wireless LAN). In the OSI model, these are located at levels 1 and 2.
A well-known family of communication media is collectively known as Ethernet. It is defined by IEEE 802 and utilizes various standards and media that enable communication between devices. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

Wired technologies

  • Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer networking cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms, unshielded twisted pair (UTP) and shielded twisted pair (STP), which are rated in categories manufactured in different increments for various scenarios.
  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.
  • Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective layers that carry data by means of pulses of light. It transmits light which can travel over extended distances. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than for twisted-pair wire. This capacity may be further increased by the use of colored light, i.e., light of multiple wavelengths. Instead of carrying one message in a stream of monochromatic light impulses, this technology can carry multiple signals in a single fiber.

Wireless technologies

  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 miles) apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.
  • Communications satellites – The satellites use microwave radio as their telecommunications medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 35,400 km (22,200 miles) above the equator (for geosynchronous satellites). These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems – use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Wireless LANs – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11.
  • Infrared communication can transmit signals between devices within small distances of typically no more than 10 meters. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
  • A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[7]

Exotic technologies

There have been various attempts at transporting data over more or less exotic media:
  • Extending the Internet to interplanetary dimensions via radio waves.[9]
A practical limit in such cases is the round-trip delay time, which constrains useful communication.

Communications protocol

A communications protocol defines the formats and rules for exchanging information via a network and typically comprises a complete protocol suite which describes the protocols used at various usage levels. An interesting feature of communications protocols is that they may be – and in fact very often are – stacked above each other, which means that one is used to carry the other. An example of this is HTTP running over TCP over IP over IEEE 802.11, where the second and third are members of the Internet Protocol Suite, while the last is a member of the Ethernet protocol suite. This is the stacking that exists between the wireless router and the home user's personal computer when surfing the World Wide Web.
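As a concrete illustration of this stacking, the short Python sketch below issues an HTTP request with the standard library: HTTP rides on TCP, which rides on IP, which rides on whatever link layer (Ethernet or IEEE 802.11) happens to be underneath. The host name is only an example.

  from http.client import HTTPConnection

  # HTTP (application layer) is carried over TCP, which is carried over IP,
  # which in turn is carried by the link layer (Ethernet, 802.11, ...).
  conn = HTTPConnection("www.example.com", 80, timeout=10)  # example host
  conn.request("GET", "/")
  response = conn.getresponse()
  print(response.status, response.reason)
  conn.close()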
Communication protocols have themselves various properties, such as whether they are connection-oriented versus connectionless, whether they use circuit mode or packet switching, or whether they use hierarchical or flat addressing.
There exist a multitude of communication protocols, a few of which are described below.

Ethernet

Ethernet is a family of connectionless protocols used in LANs, described by a set of standards together called IEEE 802 published by the Institute of Electrical and Electronics Engineers. It has a flat addressing scheme and is mostly situated at levels 1 and 2 of the OSI model. For home users today, the most well-known member of this protocol family is IEEE 802.11, otherwise known as Wireless LAN (WLAN). However, the complete protocol suite deals with a multitude of networking aspects not only for home use, but especially when the technology is deployed to support a diverse range of business needs. MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol, IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol which forms the basis for the authentication mechanisms used in VLANs, but also found in WLANs – it is what the home user sees when they have to enter a "wireless access key".

Internet Protocol Suite

The Internet Protocol Suite is used not only in the eponymous Internet, but today nearly ubiquitously in any computer network. While at the Internet protocol (IP) level it operates connectionless, it also offers a connection-oriented service layered on top of IP, the Transmission Control Protocol (TCP). Together, TCP/IP offers a semi-hierarchical addressing scheme (IP address plus port number).

SONET/SDH

Synchronous Optical NETworking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable sized packets or frames. ATM has similarity with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user. For an interesting write-up of the technologies involved, including the deep stacking of communications protocols used, see [10].

Scale

Networks are often classified by their physical or organizational extent or their purpose. Usage, trust level, and access rights differ between these types of networks.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[11] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

Local area network

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).[12]
[Figure: a typical library network, in a branching tree topology with controlled access to resources.]
All interconnected devices in such a network must understand the network layer (layer 3), because they handle multiple subnets. The switches inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 10 Gbit/s. The IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[13] LANs can be connected to wide area networks by using routers.

Home network

A home network is a residential LAN which is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or Digital Subscriber Line (DSL) provider.

properties

Facilitate communications 
Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.
Permit sharing of files, data, and other types of information
In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.
Share network and computing resources
In a networked environment, each computer on a network may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.
May be insecure
A computer network may be used by computer hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from normally accessing the network (denial of service).
May interfere with other technologies
Power line communication strongly disturbs certain forms of radio communication, e.g., amateur radio.[5] It may also interfere with last mile access technologies such as ADSL and VDSL.[6]
May be difficult to set up
A complex computer network may be difficult to set up. It may also be very costly to set up an effective computer network in a large organization or company.

computer network history

Before the advent of computer networks that were based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users by carrying instructions between them. Many of the social behaviors seen in today's Internet were demonstrably present in the 19th century and arguably in even earlier networks using visual signals.
Today, computer networks are the core of modern communication. All modern aspects of the public switched telephone network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade, and this boom in communications would not have been possible without the progressively advancing computer network. Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.

computer network

A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communications, allowing sharing of resources and information among interconnected devices.[1] Put more simply, a computer network is a collection of two or more computers linked together for the purposes of sharing information and resources, among other things. Computer networking or data communications (datacom) is the engineering discipline concerned with computer networks. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies heavily upon the theoretical and practical application of these scientific and engineering disciplines.
Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
A communications protocol defines the formats and rules for exchanging information via a network. Well-known communications protocols are Ethernet, which is a family of protocols used in LANs, and the Internet Protocol Suite, which is used not only in the eponymous Internet but is today nearly ubiquitous in computer networking.

History

Before 1800

The first studies of Western musical history date back to the middle of the 18th century. G.B. Martini published a three-volume history titled Storia della musica (History of Music) between 1757 and 1781. Martin Gerbert published a two-volume history of sacred music titled De cantu de musica sacra in 1774. Gerbert followed this work in 1784 with a three-volume work, Scriptores ecclesiastici de musica sacra, containing significant writings on sacred music from the 3rd century onwards.

1800-1950

Ludwig van Beethoven's manuscript sketch for Piano Sonata No. 28, Movement IV, Geschwind, doch nicht zu sehr und mit Entschlossenheit (Allegro), in his own handwriting. The piece was completed in 1816.
In the 20th century, the work of Johannes Wolf and others developed studies in Medieval music and early Renaissance music. Wolf's writings on the history of musical notation are considered to be particularly notable by musicologists. Historical musicology has played a critical role in renewed interest in Baroque music as well as medieval and Renaissance music. In particular, the authentic performance movement owes much to historical musicological scholarship. Towards the middle of the 20th century, musicology (and its largest subfield of historical musicology) expanded significantly as a field of study. Concurrently the number of musicological and music journals increased to create further outlets for the publication of research. The domination of German language scholarship ebbed as significant journals sprang up throughout the West, especially America.

Critiques

Exclusion of disciplines and musics

In its most narrow definition, historical musicology is the music history of Western culture. Such a definition arbitrarily excludes disciplines other than history, cultures other than Western, and forms of music other than "classical" ("art", "serious", "high culture") or notated ("artificial") - implying that the omitted disciplines, cultures, and musical styles/genres are somehow inferior. A somewhat broader definition incorporating all musical humanities is still problematic, because it arbitrarily excludes the relevant (natural) sciences (acoustics, psychology, physiology, neurosciences, information and computer sciences, empirical sociology and aesthetics) as well as musical practice. The musicological sub-disciplines of music theory and music analysis have likewise historically been rather uneasily separated from the most narrow definition of historical musicology.
Within historical musicology, scholars have been reluctant to adopt postmodern and critical approaches that are common elsewhere in the humanities. According to Susan McClary (2000, p. 1285) the discipline of "music lags behind the other arts; it picks up ideas from other media just when they have become outmoded." Only in the 1990s did historical musicologists, preceded by feminist musicologists in the late 1980s, begin to address issues such as gender, sexualities, bodies, emotions, and subjectivities which dominated the humanities for twenty years before (ibid, p. 10). In McClary's words (1991, p. 5), "It almost seems that musicology managed miraculously to pass directly from pre- to postfeminism without ever having to change - or even examine - its ways." Furthermore, in their discussion on musicology and rock music, Susan McClary and Robert Walser also address a key struggle within the discipline: how musicology has often "dismisse[d] questions of socio-musical interaction out of hand, that part of classical music's greatness is ascribed to its autonomy from society." (1988, p. 283)

Exclusion of popular music

According to Richard Middleton, the strongest criticism of (historical) musicology has been that it generally ignores popular music. Though musicological study of popular music has vastly increased in quantity recently, Middleton's assertion in 1990, that most major "works of musicology, theoretical or historical, act as though popular music did not exist", still holds true. Academic and conservatory training typically only peripherally addresses this broad spectrum of musics, and many (historical) musicologists who are "both contemptuous and condescending are looking for types of production, musical form, and listening which they associate with a different kind of music...'classical music'...and they generally find popular music lacking".
He cites three main aspects of this problem (p. 104-6). The terminology of historical musicology is "slanted by the needs and history of a particular music ('classical music')." He acknowledges that "there is a rich vocabulary for certain areas [harmony, tonality, certain part-writing and forms], important in musicology's typical corpus"; yet he points out that there is "an impoverished vocabulary for other areas [rhythm, pitch nuance and gradation, and timbre], which are less well developed" in Classical music. Middleton argues that a number of "terms are ideologically loaded" in that "they always involve selective, and often unconsciously formulated, conceptions of what music is."
Second, he claims that historical musicology uses "a methodology slanted by the characteristics of notation," or 'notational centricity' (Tagg 1979, pp. 28-32). As a result, "musicological methods tend to foreground those musical parameters which can be easily notated," such as pitch relationships or the relationship between words and music. On the other hand, historical musicology tends to "neglect or have difficulty with parameters which are not easily notated," such as tone colour or non-Western rhythms. In addition, he claims that the "notation-centric training" of Western music schools "induces particular forms of listening, and these then tend to be applied to all sorts of music, appropriately or not". As a result, Western music students trained in historical musicology may listen to a funk or Latin song that is very rhythmically complex, but then dismiss it as a low-level musical work because it has a very simple melody and uses only two or three chords.
Notational centricity also encourages "reification: the score comes to be seen as 'the music', or perhaps the music in an ideal form." As such, music that does not use a written score, such as jazz, blues, or folk, can be relegated to a lower status. Third, historical musicology has "an ideology slanted by the origins and development of a particular body of music and its aesthetic...It arose at a specific moment, in a specific context - nineteenth-century Europe, especially Germany - and in close association with that movement in the musical practice of the period which was codifying the very repertory then taken by musicology as the centre of its attention." These terminological, methodological, and ideological problems affect even works sympathetic to popular music. However, it is not "that musicology cannot understand popular music, or that students of popular music should abandon musicology" (p. 104).

Pedagogy

Although most performers of classical and traditional instruments receive some instruction in music history, including pop and rock and roll history, from their teachers throughout their training, the majority of formal music history courses are offered at the college level. In Canada, some music students receive training prior to undergraduate studies because examinations in music history (as well as music theory) are required to complete Royal Conservatory certification at the Grade 9 level and higher. Particularly in the United States and Canada, university courses tend to be divided into two groups: one type taken by students with little or no music theory or ability to read music (often called music appreciation), and the other for more musically literate students (often those planning a career in music).
Most medium and large institutions will offer both types of courses. The two types of courses will usually differ in length (one to two semesters vs. two to four), breadth (many music appreciation courses begin at the late Baroque or classical eras and might omit music after WWII while courses for majors traditionally span the period from the Middle Ages to recent times), and depth. Both types of courses tend to emphasize a balance among the acquisition of musical repertory (often emphasized through listening examinations), study and analysis of these works, biographical and cultural details of music and musicians, and writing about music, perhaps through music criticism.
More specialized seminars in music history tend to use a similar approach on a narrower subject while introducing more of the tools of research in music history (see below). The range of possible topics is virtually limitless: some examples might be "Music during World War I," "Medieval and Renaissance instrumental music," "Music and Process," or "Mozart's Don Giovanni." In the United States, these seminars are generally taken by advanced undergraduates and graduate students, though in European countries they often form the backbone of music history education.
Research methods

The methods and tools of music history are nearly as numerous as its subjects and therefore resist strict categorization. However, a few trends and approaches can be outlined here. As in any other historical discipline, most research in music history can be roughly divided into two categories: the establishment of factual and correct data, and the interpretation of that data. Most historical research does not fall solely into one category, but rather employs a combination of methods from both. The act of establishing factual data, moreover, can never be fully separated from the act of interpretation.
Source studies. A desire to examine sources of music closest to the composer or period which produced it has made manuscript, archival, and source study important in almost every field of musicology. In early music in particular, manuscript study may be the only way to study an unedited work. Such study may be complicated by the need to decipher earlier forms of music notation. Manuscript study can also allow a researcher to return to a version of a work prior to the interventions of later editors, perhaps as a basis for her own edition.
Questions such as "Why did Beethoven scratch out the name of Napoleon from the title page of his Eroica symphony?" are of interest to music historians.
Archival work may be conducted to find connections to music or musicians in a collection of documents of broader interest (e.g., Vatican pay records, letters to a patroness of the arts) or to study more systematically a collection of documents related to a particular musician. Where records, scores, and letters have been digitized, archival work can be done online; the Arnold Schoenberg Center, for example, makes archival materials relating to the composer available online.[1]
Performance practice draws on many of the tools of historical musicology to answer the specific question of how music was performed in various places at various times in the past. Scholars investigate questions such as which instruments or voices were used to perform a given work, what tempos (or tempo changes) were used, and how (or whether) ornaments were used. Although the study of performance practice was previously confined to early music of the Baroque era, since the 1990s research has examined other historical periods, such as how early Classical-era piano concerti were performed, how the early history of recording affected the use of vibrato in classical music, or which instruments were used in Klezmer music.
Biographical studies of composers can give a better sense of the chronology of compositions, influences on style and works, and provide important background to the interpretation (by performers or listeners) of works. Thus biography can form one part of the larger study of the cultural significance, underlying program, or agenda of a work; a study which gained increasing importance in the 1980s and early 1990s.
Sociological studies focus on the function of music in society as well as its meaning for individuals and society as a whole. Researchers emphasizing the social importance of music (including classical music) are sometimes called New musicologists.
Semiotic studies are most conventionally the province of music analysts rather than historians. However, crucial to the practice of musical semiotics - the interpretation of meaning in a work or style - is its situation in an historical context. The interpretative work of scholars such as Kofi Agawu and Lawrence Kramer falls between the analytic and the music historical.