Wednesday, 28 May 2014

TCP/IP or OSI - Which one came first?



The TCP/IP model, which is in reality the Internet model, came into existence about 10 years before the OSI model.

History of TCP

       From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification. A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time.

        In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In 1985, the Internet Advisory Board (later renamed the Internet Architecture Board) held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

        In 1985, the first Interop conference focused on network interoperability through broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations such as IBM and DEC attended the meeting. Interoperability conferences have been held every year since then. Every year from 1985 through 1993, the number of attendees tripled.

Tuesday, 27 May 2014

Lempel–Ziv–Welch Compression

Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. It was the algorithm of the widely used Unix file compression utility compress, and is used in the GIF image format. It works like the index at the back of a notebook.


  • I am taking a string pattern to walk through the compression step by step; a sketch follows this list.
  • Decide how many characters you want to allow per dictionary entry; I am taking a maximum of 4 characters.
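
To make the notebook-index idea concrete, here is a minimal Python sketch of LZW compression. It is not from the original post; the function name, the ASCII-initialized dictionary, and the 4-character cap on dictionary entries are my own illustrative choices, following the limit chosen above.

    MAX_ENTRY_LEN = 4  # maximum characters per dictionary entry, per the choice above

    def lzw_compress(data):
        # Start with all single characters, like the index at the back of a notebook.
        dictionary = {chr(i): i for i in range(128)}
        next_code = 128
        w = ""
        output = []
        for c in data:
            wc = w + c
            if wc in dictionary:
                w = wc                          # keep extending the current match
            else:
                output.append(dictionary[w])    # emit the code for the longest match
                if len(wc) <= MAX_ENTRY_LEN:    # add the new pattern, within the cap
                    dictionary[wc] = next_code
                    next_code += 1
                w = c
        if w:
            output.append(dictionary[w])
        return output

    print(lzw_compress("ABCDACDCAB"))
    # -> [65, 66, 67, 68, 65, 130, 67, 128]; codes below 128 are raw ASCII,
    #    codes 128 and up are multi-character dictionary entries (128 = "AB", 130 = "CD")

Repeated patterns such as "AB" and "CD" are emitted as a single code the second time they appear, which is where the compression comes from.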

Sunday, 25 May 2014

Algorithms

Upper and Lower Bounds of a Function

Upper Bound : Proving an upper bound means you have proven that the algorithm will use no more than some limit on a resource.

Lower Bound : Proving a lower bound means you have proven that the algorithm will use no less than some limit on a resource.


Upper and lower bounds have to do with the minimum and maximum "complexity" of an algorithm (I use that word advisedly since it has a very specific meaning in complexity analysis).

Take, for example, our old friend, the bubble sort. In an ideal case where all the data are already sorted, the time taken is f(n), a function dependent on n, the number of items in the list. That's because you only have to make one pass of the data set (with zero swaps) to ensure your list is sorted.

In a particularly bad case, where the data are sorted in the opposite of the order you want, the time taken becomes f(n²). This is because each pass moves one element into its correct position, and you need n passes to place all the elements.
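
A short Python sketch makes the two cases visible; this is a generic textbook bubble sort, not code from the post. The early-exit flag is what produces the one-pass best case described above.

    def bubble_sort(items):
        n = len(items)
        for i in range(n):
            swapped = False
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            if not swapped:   # a pass with zero swaps: the list is already sorted
                break
        return items

    print(bubble_sort([1, 2, 3, 4]))  # best case: one pass, f(n)
    print(bubble_sort([4, 3, 2, 1]))  # worst case: n passes, f(n²)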

Friday, 23 May 2014

Huffman Compression and Huffman Tree



Hi folks,
We use ASCII codes to represent characters inside a computer. There are two types of ASCII: 7-bit and 8-bit. 8-bit ASCII is known as extended ASCII.
In 7-bit ASCII, text is represented in the following manner:

ABCDACDCAB     (each character takes 7 bits)

Total bits = number of characters × 7
Total bits = 10 × 7
Total bits = 70

If we consider the frequency of each character, we find:

Frequency of A = 3
Frequency of B = 2
Frequency of C = 3
Frequency of D = 2

In 7-bit ASCII we can represent 128 characters, but it is not necessary that every character appears in a string, as our example shows. There are only four distinct characters here, so if we use a 3-bit code we will save some bits,
i.e. A=000
      B=001
      C=100
      D=101
Now the total bits required are 10 × 3 = 30 instead of 70.
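
The fixed 3-bit code above already saves bits, but Huffman coding goes further: it builds a tree by repeatedly merging the two least frequent symbols, so more frequent characters get shorter codes. Below is a minimal Python sketch using the standard heapq module; the function name is my own, and the exact 0/1 labels are arbitrary (any prefix-free code with the same lengths is equally valid). For our string, every character ends up with a 2-bit code, so the total is 20 bits.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Each heap entry: (frequency, tie_breaker, subtree); a subtree is
        # either a character or a (left, right) pair of subtrees.
        heap = [(freq, i, ch) for i, (ch, freq) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            f1, _, t1 = heapq.heappop(heap)     # merge the two least frequent
            f2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
            count += 1
        codes = {}
        def walk(tree, prefix=""):
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")     # left edge = 0
                walk(tree[1], prefix + "1")     # right edge = 1
            else:
                codes[tree] = prefix or "0"     # edge case: one distinct character
        walk(heap[0][2])
        return codes

    codes = huffman_codes("ABCDACDCAB")
    print(codes)                                      # every code here is 2 bits
    print(sum(len(codes[c]) for c in "ABCDACDCAB"))   # 20 bits in total

With frequencies A=3, B=2, C=3, D=2, Huffman first merges B and D, then A and C, then the two subtrees, which is why all four codes come out the same length here; on skewed frequencies the savings over a fixed-length code would be larger.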

Tuesday, 20 May 2014

Database Key

A database is a repository of an organization's data. It stores data and provides many types of sophisticated services for inserting, updating, deleting, and backing up records, among others. Here I am interested in explaining the many types of keys.

Database : A database is a collection of related tables.

Table : A table is a collection of related records.

Record : A record is a collection of related fields.

Field : A field, or attribute, is the smallest individual unit of a table.

NULL : NULL is a systematic way to treat a blank value that is not available currently but might appear in the future.

Key : A key is used to identify a record within a set of records. A small sketch follows.
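
To tie the terms together, here is a minimal Python sketch using the built-in sqlite3 module. The table and column names (student, roll_no, name, city) are made up for illustration; the point is that the primary key picks out exactly one record, and city shows a value that is NULL for now but might appear later.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE student (
            roll_no INTEGER PRIMARY KEY,  -- the key: identifies one record
            name    TEXT NOT NULL,        -- a field (attribute)
            city    TEXT                  -- may be NULL: not available yet
        )
    """)
    conn.execute("INSERT INTO student VALUES (1, 'Asha', 'Delhi')")
    conn.execute("INSERT INTO student (roll_no, name) VALUES (2, 'Ravi')")  # city is NULL

    # The key picks out exactly one record from the record set.
    print(conn.execute("SELECT name, city FROM student WHERE roll_no = 2").fetchone())
    # -> ('Ravi', None)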

Monday, 19 May 2014

How to Make a Gmail Signature and Confidential Message

What is a Gmail Signature

A signature is a bit of text (such as your contact information or a favorite quote) that’s automatically inserted at the bottom of every message you send. Here's a sample signature:

Tuesday, 13 May 2014

Functional Dependency and Normalization


Purpose of Normalization

Normalization is a technique for producing a set of suitable relations that support the data requirements of an enterprise.

Characteristics of a suitable set of relations include:

- the minimal number of attributes necessary to support the data requirements of the enterprise;
- attributes with a close logical relationship are found in the same relation;
- minimal redundancy, with each attribute represented only once, with the important exception of attributes that form all or part of foreign keys (a small sketch follows this list).
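
As a concrete illustration of these characteristics, here is a minimal Python sketch using the built-in sqlite3 module. The schema and the functional dependency dept_id → dept_name are made up for this example: in a single flat table the department name would repeat for every employee, while the decomposed design stores it once, with dept_id as the foreign-key exception noted above.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Decomposed design: dept_name is stored exactly once, and employee.dept_id
    # is the foreign key, the allowed exception to "each attribute only once".
    conn.executescript("""
        CREATE TABLE department (
            dept_id   INTEGER PRIMARY KEY,
            dept_name TEXT
        );
        CREATE TABLE employee (
            emp_id   INTEGER PRIMARY KEY,
            emp_name TEXT,
            dept_id  INTEGER REFERENCES department(dept_id)
        );
        INSERT INTO department VALUES (10, 'Sales');
        INSERT INTO employee VALUES (1, 'Asha', 10), (2, 'Ravi', 10);
    """)

    # A join recovers the flat view without ever storing 'Sales' twice; changing
    # the department name now means updating a single row, avoiding update anomalies.
    for row in conn.execute("""
        SELECT e.emp_name, d.dept_name
        FROM employee e JOIN department d ON e.dept_id = d.dept_id
    """):
        print(row)   # ('Asha', 'Sales'), then ('Ravi', 'Sales')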