Network Border Patrol: Preventing Congestion Collapse
#1

Network Border Patrol: Preventing Congestion Collapse
ABSTRACT:-
The fundamental philosophy behind the Internet is expressed by the scalability argument: no protocol, mechanism, or service should be introduced into the Internet if it does not scale well. A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, algorithmic complexity should be pushed to the edges of the network whenever possible.
Perhaps the best example of the Internet philosophy is TCP congestion control, which is implemented primarily through algorithms operating at end systems. Unfortunately, TCP congestion control also illustrates some of the shortcomings of the end-to-end argument. As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from two maladies: congestion collapse from undelivered packets and unfair allocation of bandwidth between competing flows.
The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called network border patrol (NBP).
NBP entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network.
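To make the border-feedback idea concrete, here is a minimal Java sketch (Java being the stated platform for this project) of how an ingress router might adjust a flow's permitted rate when backward feedback from the corresponding egress router arrives. This is not the paper's exact rate-control algorithm; the class names, the 95% threshold, and the initial and increment rates are illustrative assumptions only.
[code]
import java.util.HashMap;
import java.util.Map;

/* Feedback record carried from an egress router back to the ingress router. */
class BackwardFeedback {
    final String flowId;        // identifies the monitored flow
    final double egressRateBps; // measured rate at which the flow leaves the network

    BackwardFeedback(String flowId, double egressRateBps) {
        this.flowId = flowId;
        this.egressRateBps = egressRateBps;
    }
}

/* Simplified per-flow rate controller at the ingress router. */
class IngressRateController {
    private static final double INITIAL_RATE_BPS = 64000.0; // assumed starting rate
    private static final double INCREMENT_BPS = 8000.0;     // additive probe step

    private final Map<String, Double> allowedRateBps = new HashMap<String, Double>();

    /* Called whenever backward feedback for a flow arrives. */
    void onBackwardFeedback(BackwardFeedback fb) {
        double current = allowedRate(fb.flowId);
        if (fb.egressRateBps >= 0.95 * current) {
            // Nearly all admitted traffic is leaving the network: probe for more bandwidth.
            allowedRateBps.put(fb.flowId, Double.valueOf(current + INCREMENT_BPS));
        } else {
            // Traffic is being lost inside the network: throttle toward the egress rate.
            allowedRateBps.put(fb.flowId,
                    Double.valueOf(Math.max(fb.egressRateBps, INCREMENT_BPS)));
        }
    }

    /* Rate at which the flow's traffic shaper may release packets into the network. */
    double allowedRate(String flowId) {
        Double stored = allowedRateBps.get(flowId);
        return (stored != null) ? stored.doubleValue() : INITIAL_RATE_BPS;
    }
}
[/code]
The permitted rate computed here would be handed to the flow's traffic shaper at the ingress router (see the shaper sketch under Module 2 below).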
PROJECT MODULE:-
The various modules in the protocol are as follows
Module 1:- SOURCE MODULE:-
The task of this Module is to send the packet to the Ingress router.
Module 2:- INGRESS ROUTER MODULE:-
An edge router operating on a flow passing into a network is called an ingress router. NBP prevents congestion collapse through a combination of per-flow rate monitoring at egress routers and per-flow rate control at ingress routers. Rate control allows an ingress router to police the rate at which each flow's packets enter the network. The ingress router contains a flow classifier, per-flow traffic shapers (e.g., leaky buckets), a feedback controller, and a rate controller.
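As an illustration of the per-flow traffic shaper mentioned above, here is a minimal leaky-bucket sketch in Java. The drain rate would normally come from the rate controller; the bucket depth and the decision to simply reject non-conforming packets (rather than queue them) are simplifying assumptions.
[code]
/* Minimal per-flow leaky-bucket shaper: the bucket drains at the permitted
 * rate, and a packet is admitted only if it fits into the remaining depth. */
class LeakyBucketShaper {
    private double rateBps;          // permitted drain rate, set by the rate controller
    private final double depthBits;  // maximum burst the bucket tolerates
    private double levelBits = 0.0;  // current bucket occupancy
    private long lastUpdateNanos = System.nanoTime();

    LeakyBucketShaper(double rateBps, double depthBits) {
        this.rateBps = rateBps;
        this.depthBits = depthBits;
    }

    void setRate(double rateBps) {
        this.rateBps = rateBps;
    }

    /* Returns true if a packet of the given size may enter the network now. */
    synchronized boolean admit(int packetSizeBytes) {
        long now = System.nanoTime();
        double elapsedSec = (now - lastUpdateNanos) / 1e9;
        lastUpdateNanos = now;

        // Leak: drain the bucket for the time that has passed.
        levelBits = Math.max(0.0, levelBits - rateBps * elapsedSec);

        double packetBits = packetSizeBytes * 8.0;
        if (levelBits + packetBits <= depthBits) {
            levelBits += packetBits;
            return true;   // conforming: forward the packet into the network
        }
        return false;      // non-conforming: hold or drop the packet at the edge
    }
}
[/code]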
Module 3:- ROUTER MODULE:-
The task of this Module is to accept the packet from the Ingress router and send it to the Egress router.
Module 4:- EGRESS ROUTER MODULE:-
An edge router operating on a flow passing out of a network is called an egress router. NBP prevents congestion collapse through a combination of per-flow rate monitoring at egress routers and per-flow rate control at ingress routers. Rate monitoring allows an egress router to determine how rapidly each flow's packets are leaving the network. Rates are monitored using a rate-estimation algorithm such as the Time Sliding Window (TSW) algorithm. The egress router contains a flow classifier, a rate monitor, and a feedback controller.
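The TSW update itself is short. Below is a Java sketch of the standard TSW estimator (average the bytes assumed to be in the window plus the new packet over the elapsed time plus the window length); the window length is only an illustrative parameter.
[code]
/* Time Sliding Window (TSW) rate estimator for per-flow monitoring
 * at the egress router. */
class TswRateEstimator {
    private final double winLengthSec;          // averaging window length (assumed)
    private double avgRateBytesPerSec = 0.0;    // current rate estimate
    private double tFrontSec = 0.0;             // time of the last update

    TswRateEstimator(double winLengthSec) {
        this.winLengthSec = winLengthSec;
    }

    /* Update the estimate when a packet of the given size leaves the
     * network at time nowSec (seconds since some fixed origin). */
    void onPacketDeparture(int packetSizeBytes, double nowSec) {
        double bytesInWindow = avgRateBytesPerSec * winLengthSec;
        double newBytes = bytesInWindow + packetSizeBytes;
        avgRateBytesPerSec = newBytes / (nowSec - tFrontSec + winLengthSec);
        tFrontSec = nowSec;
    }

    double rateBytesPerSec() {
        return avgRateBytesPerSec;
    }
}
[/code]
The estimate produced here is what the egress router's feedback controller would report back to the ingress router.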
Module 5:- DESTINATION MODULE:-
The task of this Module is to accept the packet from the Egress router and store it in a file on the Destination machine.
DATA FLOW DIAGRAM:
EXISTING SYSTEM
Packets are buffered in the routers inside the network, which leads to congestion collapse from undelivered packets: bandwidth is continuously consumed by packets that are dropped before reaching their ultimate destinations.
Retransmission of undelivered packets is required to ensure no loss of data.
Unfair bandwidth allocation arises in the Internet, in part due to the presence of unresponsive flows.
PROPOSED SYSTEM
Buffering of packets is carried out in the edge routers rather than in the core routers.
Packets are sent into the network according to the network's capacity, so there is no possibility of undelivered packets inside the network.
Absence of undelivered packets avoids overload due to retransmission.
Fair allocation of bandwidth is ensured.
SOFTWARE REQUIREMENTS:-
Java 1.5 or above
Swing
Windows 98 or later
HARDWARE REQUIREMENTS:-
Hard disk : 40 GB
RAM : 128 MB
Processor : Pentium IV

#2
[attachment=5336]
[i]NETWORK BORDER PATROL: PREVENTING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET[/i]

Introduction
The essential philosophy behind the Internet is expressed by the scalability argument: no protocol, algorithm or service should be introduced into the Internet if it does not scale well. A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, algorithmic complexity should be pushed to the edges of the network whenever possible. Perhaps the best example of the Internet philosophy is TCP congestion control, which is achieved primarily through algorithms implemented at end systems. Unfortunately, TCP congestion control also illustrates some of the shortcomings of the end-to-end argument.
As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from two maladies: congestion collapse from undelivered packets, and unfair allocations of bandwidth between competing traffic flows. The first malady--congestion collapse from undelivered packets--arises when bandwidth is continuously consumed by packets that are dropped before reaching their ultimate destinations. Unresponsive flows, which are becoming increasingly prevalent in the Internet as network applications using audio and video become more popular, are the primary cause of this type of congestion collapse, and the Internet currently has no way of effectively regulating them.
The second malady--unfair bandwidth allocation--arises in the Internet for a variety of reasons, one of which is the presence of unresponsive flows. Adaptive flows (e.g., TCP flows) that respond to congestion by rapidly reducing their transmission rates are likely to receive unfairly small bandwidth allocations when competing with unresponsive or malicious flows. The Internet protocols themselves also introduce unfairness. The TCP algorithm, for instance, inherently causes each TCP flow to receive a bandwidth that is inversely proportional to its round trip time. Hence, TCP connections with short round trip times may receive unfairly large allocations of network bandwidth when compared to connections with longer round trip times.
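The inverse dependence on round trip time can be made concrete with the well-known steady-state TCP throughput approximation (quoted here as a rough rule of thumb, not a result from this paper), where MSS is the segment size, RTT the round trip time, p the packet loss probability, and C a constant of roughly 1.22 for standard TCP:

throughput ≈ (MSS / RTT) · (C / √p)

Two competing TCP flows that see the same loss rate therefore obtain bandwidth roughly in inverse proportion to their round trip times.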
These maladies--congestion collapse from undelivered packets and unfair bandwidth allocations--have not gone unrecognized. Some have argued that they may be mitigated through the use of improved packet scheduling or queue management mechanisms in network routers. For instance, per-flow packet scheduling mechanisms like Weighted Fair Queueing (WFQ) attempt to offer fair allocations of bandwidth to flows contending for the same link. So does Core-Stateless Fair Queueing (CSFQ), an approximation of WFQ that requires only edge routers to maintain per-flow state. Active queue management mechanisms like Fair Random Early Detection (FRED) achieve an effect similar to fair queueing by discarding packets from flows that are using more than their fair share of a link's bandwidth.
For more information about network border patrol, please follow the links:

http://seminarsprojects.net/Thread-netwo...s-in-the-i
http://seminarsprojects.net/Thread-netwo...der-patrol

#3
I am doing this project for my final year of engineering. Could you please send me some details about it beyond the PDF file, such as its advantages, disadvantages, and working principles? Please reply as soon as possible; I hope you can help me.

#4
I want to know more details about this project.
Please let me know.
I am doing this as my major project in my engineering course,
so please help me out.

#5

You can find more details in the attached files, so please download them.


