Wednesday, 28 May 2014

TCP/IP or OSI - Which one came first?



The TCP/IP model, which is more accurately the Internet model, came into existence about ten years before the OSI model.

History of TCP

From 1973 to 1974, Cerf's networking research group at Stanford worked out the details of the idea, resulting in the first TCP specification. A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was designed around the same time.

In March 1982, the US Department of Defense declared TCP/IP the standard for all military computer networking. In 1985, the Internet Advisory Board (later renamed the Internet Architecture Board) held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

In 1985, the first Interop conference focused on network interoperability through broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations such as IBM and DEC attended the meeting. Interoperability conferences have been held every year since then, and every year from 1985 through 1993 the number of attendees tripled.

Tuesday, 27 May 2014

Lempel–Ziv–Welch Compression

Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. It was the algorithm of the widely used Unix file compression utility compress, and it is used in the GIF image format. It works much like the index at the back of a notebook: a repeated pattern is replaced by a short reference to where it was seen before.


  • I am taking a string pattern to demonstrate how the compression works.
  • Choose how many characters a dictionary entry may hold; I am taking a maximum of 4 characters per dictionary entry. A small sketch of the basic LZW loop follows after this list.
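
To make the dictionary idea concrete, here is a minimal Python sketch of the basic LZW encoder loop. It is my own illustration, not the exact procedure of the worked example above: it seeds the dictionary with single ASCII characters and does not cap entries at 4 characters. It shows how the longest already-seen pattern is replaced by its dictionary index, like looking something up in the notebook index.

def lzw_compress(text):
    # Minimal LZW sketch: seed the dictionary with single ASCII characters,
    # then repeatedly emit the index of the longest known match and add the
    # extended pattern as a new dictionary entry.
    dictionary = {chr(i): i for i in range(128)}
    next_code = 128
    current = ""
    output = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate                  # keep growing the match
        else:
            output.append(dictionary[current])   # emit index of longest match
            dictionary[candidate] = next_code    # new "index entry" for the pattern
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output

print(lzw_compress("ABABABA"))   # -> [65, 66, 128, 130]

Each output code can be stored in fewer bits than the repeated characters it replaces, which is where the compression comes from.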

Sunday, 25 May 2014

Algorithms

Upper and Lower bound of a function

Upper Bound : Proving an upper bound means you have proven that the algorithm will use no more than some limit on a resource.

Lower Bound : Proving a lower bound means you have proven that the algorithm will use no less than some limit on a resource.


Upper and lower bounds have to do with the maximum and minimum "complexity" of an algorithm (I use that word advisedly, since it has a very specific meaning in complexity analysis).

Take, for example, our old friend, the bubble sort. In an ideal case where all the data are already sorted, the time taken is f(n), a function of n, the number of items in the list. That's because you only have to make one pass over the data set (with zero swaps) to confirm your list is sorted.

In a particularly bad case, where the data are sorted in the opposite of the order you want, the time taken becomes f(n²). This is because each pass moves one element into its correct position, and you need n passes to place all the elements.
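
As a concrete illustration, here is a minimal Python sketch, assuming the common early-exit variant of bubble sort that stops when a pass makes no swaps. The same routine hits both extremes: one pass on already sorted input, and n-1 full passes on reverse-sorted input.

def bubble_sort(items):
    # Bubble sort with an early-exit flag: already sorted input needs only
    # one pass, reverse-sorted input needs n-1 passes of pairwise swaps.
    data = list(items)
    n = len(data)
    passes = 0
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        passes += 1
        if not swapped:      # no swaps means the list is sorted; stop early
            break
    return data, passes

print(bubble_sort([1, 2, 3, 4, 5]))   # best case:  ([1, 2, 3, 4, 5], 1)
print(bubble_sort([5, 4, 3, 2, 1]))   # worst case: ([1, 2, 3, 4, 5], 4)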

Friday, 23 May 2014

Huffman Compression and Huffman Tree



Hi folks
We use ASCII codes to represent characters inside a computer. There are two types of ASCII: 7-bit and 8-bit. 8-bit ASCII is known as extended ASCII.
If we represent the following text in 7-bit ASCII:

ABCDACDCAB     (each character takes 7 bits)

Total Bit   = No. of character * 7
Total Bit   =  10*7
Total Bit   = 70

If we consider the frequency of each character, we find:

Frequency of A = 3
Frequency of B = 2
Frequency of C = 3
Frequency of D = 2

7-bit ASCII can represent 128 characters, but it is not always necessary that every one of them appears in a string, as our example shows. Only four distinct characters appear (and they repeat), so if we use a 3-bit code for each character we will save some bits,
i.e. A=000
      B=001
      C=100
      D=101
Now the total bits required are 10*3 = 30 instead of 70.
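
The post's title promises a Huffman tree, and the same example shows why variable-length codes go further than the fixed 3-bit table above. Below is a minimal Python sketch of my own, using heapq to repeatedly merge the two least frequent nodes; it is one standard way to build the tree, not necessarily the exact procedure intended here. For ABCDACDCAB it assigns every character a 2-bit code, bringing the total down to 20 bits.

import heapq
from collections import Counter

def huffman_codes(text):
    # Build a Huffman tree by repeatedly merging the two least frequent nodes,
    # then walk the tree to assign a prefix-free bit string to each character.
    freq = Counter(text)
    # Heap entries are (frequency, tie-breaker, node); a leaf node is a
    # character, an internal node is a (left, right) pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):            # leaf: record the accumulated bits
            codes[node] = prefix or "0"
        else:
            walk(node[0], prefix + "0")      # left branch adds a 0
            walk(node[1], prefix + "1")      # right branch adds a 1
    walk(heap[0][2], "")
    return codes

text = "ABCDACDCAB"
codes = huffman_codes(text)
print(codes)                                             # e.g. {'B': '00', 'D': '01', 'A': '10', 'C': '11'}
print(sum(len(codes[ch]) for ch in text), "bits total")  # 20 bits instead of 30 or 70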