FIFO Scheduler Report

We were tasked with implementing a FIFO scheduler simulator with these specifications:

  • A generator that produces the packets to be processed
  • A scheduler that fetches packets from a queue and processes them
  • A FIFO queue that stores the packets from the generator

I implemented the simulator in Python. Python's Queue class is thread safe and supports a maximum size, which makes it a natural fit for the bounded FIFO buffer in this simulator.

In all cases, the generator produced 100 packets.
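
A minimal sketch of how the simulator is structured is shown below. The names (generate, schedule) and the parameter values (Case 2 settings) are illustrative; this is not the exact program I submitted, just the general idea of a bounded, thread-safe FIFO between a generator thread and a scheduler thread.

  import queue
  import threading
  import time

  NUM_PACKETS = 100   # packets produced by the generator, as in every case
  INPUT_RATE = 5      # packets per second (Case 2 values, for illustration)
  OUTPUT_RATE = 1
  BUFFER_SIZE = 1     # FIFO capacity in packets

  buffer = queue.Queue(maxsize=BUFFER_SIZE)  # thread-safe bounded FIFO
  dropped = 0

  def generate():
      """Produce packets at INPUT_RATE; drop a packet when the buffer is full."""
      global dropped
      for i in range(NUM_PACKETS):
          try:
              buffer.put_nowait((i, time.time()))  # packet id and arrival time
          except queue.Full:
              dropped += 1                         # buffer full -> packet dropped
          time.sleep(1.0 / INPUT_RATE)

  def schedule():
      """Fetch packets in FIFO order at OUTPUT_RATE and record waiting times."""
      waits = []
      for _ in range(NUM_PACKETS):
          try:
              _, arrived = buffer.get(timeout=2.0)
          except queue.Empty:
              break                                # generator done and buffer drained
          waits.append(time.time() - arrived)
          time.sleep(1.0 / OUTPUT_RATE)
      if waits:
          print("average waiting time: %.6f s" % (sum(waits) / len(waits)))

  gen = threading.Thread(target=generate)
  sch = threading.Thread(target=schedule)
  gen.start(); sch.start()
  gen.join(); sch.join()
  print("drop rate: %d %%" % (100 * dropped // NUM_PACKETS))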

Case 1:

  • Input rate = 1 packet / s
  • Output rate = 1 packet / s
  • Buffer size = 1 unit
  • Results:
    • Drop rate = 0 %
    • Waiting time = 0.008092 s

Case 2:

  • Input rate = 5 packets / s
  • Output rate = 1 packet / s
  • Buffer size = 1 unit
  • Results:
    • Drop rate = 78 %
    • Waiting time = 0.001251 s

Case 3:

  • Input rate = 5 packets / s
  • Output rate = 1 packet / s
  • Buffer size = 5 units
  • Results:
    • Drop rate = 74 %
    • Waiting time = 0.008092 s

Case 4:

  • Input rate = 5 packets / s
  • Output rate = 5 packets / s
  • Buffer size = 25 units
  • Results:
    • Drop rate = 0 %
    • Waiting time = 0.008092 s

Case 5:

  • Input rate = 5 packets / s
  • Output rate = 25 packets / s
  • Buffer size = 5 units
  • Results:
    • Drop rate = 0 %
    • Waiting time = 16.023787 s

Case 6:

  • Input rate = 10 packets / s
  • Output rate = 5 packets / s
  • Buffer size = 20 units
  • Results:
    • Drop rate = 29 %
    • Waiting time = 0.004243 s

From these results, I can say that the different parameters affect how the scheduler behaves.

  • If Input rate = Output rate and buffer size is constant, there is insignificant drop rate and idle time.
  • If Input rate > Output rate and buffer size is constant, there is a significant drop rate but insignificant idle time.
  • If Input rate < Output rate and buffer size is constant, there is insignificant drop rate but significant idle time.
  • If Buffer size is increased, the scheduler will have a lower drop rate.

Paper Review on “Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks”

This article focuses on the analysis of different algorithms for congestion avoidance in computer networks. The key metrics used are efficiency, fairness, convergence time, and size of oscillations.

It explains that the coexistence of high-bandwidth media (such as fiber optic LANs) and low-bandwidth media (such as twisted pair) has resulted in increased queueing and congestion in networks.

Congestion avoidance differs from congestion control: the latter tries to keep the network operating in the zone just before throughput falls sharply (the cliff), while the former keeps the network operating at the knee.

The different control functions that were discussed are:

  1. Multiplicative Increase / Multiplicative Decrease
  2. Additive Increase / Additive Decrease
  3. Additive Increase / Multiplicative Decrease (a small sketch of this rule follows the list)
  4. Multiplicative Increase / Additive Decrease
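
To make the third rule concrete, here is a rough sketch of an AIMD load/window update in Python. This is my own illustration, not code from the paper; the constants a and b are arbitrary example values.

  def aimd_update(window, congested, a=1.0, b=0.5):
      """One AIMD step: grow the load additively while the network reports
      no congestion, and shrink it multiplicatively (0 < b < 1) otherwise."""
      if congested:
          return window * b   # multiplicative decrease
      return window + a       # additive increase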

The different metrics were defined as follows:

  1. Efficiency
    • closeness of the total load on the resource to its knee
    • both underload and overload are considered inefficient
  2. Fairness
    • users in the same equivalence class should have an equal share of the bottleneck (see the fairness-index sketch after this list).
  3. Distributedness
    • the system should require only a minimal amount of feedback
  4. Convergence
    • speed at which the system approaches the goal state from the start state, taken as the combination of responsiveness and smoothness
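
On the fairness metric, as far as I recall the paper measures it with the fairness index attributed to Jain, F(x) = (sum of x_i)^2 / (n * sum of x_i^2). A small Python sketch of that formula (my own illustration):

  def fairness_index(allocations):
      """Jain's fairness index: 1.0 when every user gets an equal share,
      approaching 1/n when a single user takes everything."""
      n = len(allocations)
      total = sum(allocations)
      return (total * total) / (n * sum(x * x for x in allocations))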

The paper concludes that additive increase converges to fairness the fastest and that, combined with multiplicative decrease, it is the optimal control function to use.

I like how the paper included graphs, which really helped me understand the mathematical equations. It was easy to see whether or not the functions converge.

Paper Review on “A Survey of Peer-to-Peer Content Distribution Technologies”

Peer-to-peer architecture is an overlay network architecture that does not require a centralized server for sharing computer resources; instead, peers share resources over direct connections to one another. Because P2P architecture is scalable and resilient to a highly transient population of nodes, it is an attractive architecture for applications to build on.

As of the time of writing of this paper [1], many applications had been developed on this architecture. These applications can be classified as follows:

  • Communication and Collaboration
  • Distributed Computation
  • Internet Service Support
  • Database Systems
  • Content Distribution

This paper focuses on the area of content distribution, one of the most popular uses of P2P applications.

Different P2P applications employ varying degrees of centralization. They are categorized as follows:

  • Hybrid Decentralized
    • uses central servers for communication between nodes
    • data transfers are done directly from node to node
  • Purely Decentralized
    • no central coordination of activities
    • nodes connect to each other directly
    • users are referred to as servents
  • Partially Decentralized
    • some nodes act as supernodes
    • supernodes take on more important functions than ordinary nodes

Structured and unstructured networks were also defined here.

Several open research problems were also given.  One problem that struck me is the convergence of grid and P2P systems.  Several papers have been written on this idea.

Reference:

  1. S. Androutsellis-Theotokis and D. Spinellis. A Survey of Peer-to-Peer Content Distribution Technologies. ACM Computing Surveys (CSUR), Volume 36, Issue 4, December 2004

Paper Review on “A Survey and Comparison of Peer-to-Peer Overlay Network Schemes”

This paper [1] presents a survey of the different peer-to-peer overlay network schemes. There are two types of peer-to-peer networks: structured and unstructured. In a structured network, the topology is tightly controlled and content is placed not at random but at specified locations, which makes subsequent queries efficient. An unstructured network has looser rules and is composed of peers that join the network without any prior knowledge of its topology.

Flooding is the main mechanism an unstructured network uses to send queries across the overlay, with a limited scope. This approach is effective for well-replicated content, but it is not scalable and is not effective for locating rare content.
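
As a rough illustration of this scoped flooding (my own sketch, assuming a simple Node object with neighbours and local_items attributes, not any particular system from the survey):

  def flood(node, matches, ttl, seen=None):
      """Answer a query by forwarding it to every neighbour, decrementing the
      TTL at each hop and skipping nodes that have already been visited."""
      if seen is None:
          seen = set()
      if ttl <= 0 or node in seen:
          return []
      seen.add(node)
      hits = [item for item in node.local_items if matches(item)]  # local matches
      for neighbour in node.neighbours:
          hits.extend(flood(neighbour, matches, ttl - 1, seen))    # forward, TTL - 1
      return hits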

The following features are used to further make comparisons between the different Structured and Unstructured P2P overlay schemes:

  • Decentralization
  • Architecture
  • Lookup Protocol
  • System Parameters
  • Routing Performance
  • Routing State
  • Peers Join and Leave
  • Security
  • Reliability and Fault Resiliency

After surveying and comparing the various schemes, the article offered thoughts on where future P2P overlay networking research should go. One of the ideas that struck me is the application of P2P overlay networking models to mobile and ad-hoc wireless networks. I searched the web on this and found that papers have already been written to implement and test the idea.

Reference:

  1. E. Lua, J. Crowcroft, M. Pias, R. Sharma, and S. Lim. A Survey and Comparison of Peer-to-Peer Overlay Network Schemes. IEEE Communications Surveys and Tutorials, Volume 7, No. 2, 2005.

Paper Review on “Rethinking the design of the Internet: The end to end arguments vs. the brave new world”

This paper shows that the Internet protocols have undergone various changes because of the uses people have found for the network.

Communication between hosts was the priority when the Internet was new, and reliable and effective protocols emerged because of it. But due to the ever-changing requirements the community has for the Internet, various mechanisms have since been added to it.

The end-to-end argument suggests that specific application-level functions should, as much as possible, not be implemented in the lower layers or the network core. This reduces the complexity of the network core, which in turn reduces cost and makes upgrades easier.

Issues around using the Internet, such as security and the legal, social, and economic aspects, were also discussed.

This paper describes the new requirements that the community has been demanding of the Internet in order to utilise it to its full capacity. Because of these demands, we can now see where the growing Internet architecture is headed.

Paper Review on “RFC 1958: Architectural Principles of the Internet”

The author states in the abstract that the Internet evolved from modest beginnings rather than from a Grand Plan. There is no central body that controls how the Internet changes; it changes instead through the consensus of the community's many implementations and principles. He also states that principles created for the Internet that were once important have since been deprecated. It seems that the principles we think are important now may not be that important in the near future.

The paper then presents the various fundamental guidelines that the author collated, which have proven useful in the past.

The Internet has no real architecture; rather, it has a tradition that people have been upholding.

I agree with the guidelines that the author has provided. I also liked how the paper gave me a quick insight into the different rules the community has been using.

Paper Review on “The Design Philosophy of the DARPA Internet Protocols”

This paper discusses the different design goals of the Internet that have been previously worked on.

The top-level goal of the DARPA Internet architecture was to develop an effective technique for multiplexed utilization of existing interconnected networks. The paper also lists the second-level goals that were established for the Internet architecture and specifies what is meant by “effective”.

It also discusses how some services being built for the Internet do not require the reliability that TCP provides, but can instead be served on a “best effort” basis using datagrams.

I liked how this paper tied TCP reliability to the future services that can be built on the Internet. I agree that because the Internet is still evolving, we should not be bound to the protocols currently offered and should think about how the services we will be building will be used. Some services may not need to be reliable but must be fast, while others may need to be reliable even if speed is not important.

Paper Review on “A Protocol for Packet Network Communication”

This paper by Cerf and Kahn discusses how they conceptualized TCP (Transmission Control Protocol, although in this paper it stands for Transmission Control Program) in the early stages of the Internet. They discuss how they planned to design and implement a communication protocol that is efficient and reliable, the different problems that would be encountered, and the different ways they would solve those problems.

They first introduce the packet-switching implementations that existed at the time and note that it would be extremely convenient if the differences between these implementations could be resolved. The paper then introduces the notion of a “Gateway” and its role in the protocol. This Gateway acts as the interface between the different networks and provides routing, reformatting, and forwarding of packets from one network to another, as well as fragmentation of packets, but it is not responsible for reassembling them. The messages in the packets remain unmodified; however, additional headers are added so that the TCP can process them.

I like how they thought through and analyzed the different scenarios that would likely come to pass. They even presented the cases they could use and explained why they chose the one they would be using.

What I don’t like about this paper is the relatively weak explanation of the algorithm they would be using; they only gave us the surface of it. Also, it focuses mostly on the design level of the program they would be building.

All in all, I like how this idea from 1974 is still widely used today and how people keep helping to improve it.