Here's a bunch of Cisco related notes to help with CCNA studies. Wish me luck!
For some Packet Tracer labs, go here.
show version
show running-config   ! requires Enable mode
show interfaces
show logging
show tech-support
! enter 'Enable' mode
R2>enable

! show interfaces
R2#show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
FastEthernet0/0        unassigned      YES unset  administratively down down
FastEthernet0/1        unassigned      YES unset  administratively down down
FastEthernet1/0        unassigned      YES unset  administratively down down
FastEthernet1/1        unassigned      YES unset  administratively down down
Vlan1                  unassigned      YES unset  administratively down down

! enter config mode
R2#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.

! specify interface port FastEthernet 0/0
R2(config)#interface fastethernet 0/0

! assign ip address and activate port
R2(config-if)#ip address 10.0.0.2 255.255.255.0
R2(config-if)#no shutdown
R2(config-if)#
%LINK-5-CHANGED: Interface FastEthernet0/0, changed state to up
%LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0, changed state to up

! specify another ethernet interface to configure, assign IP, activate port,
! come out of configure and privileged exec mode and check interfaces
R2(config-if)#interface fastethernet 0/1
R2(config-if)#ip address 10.1.0.2 255.255.255.0
R2(config-if)#no shutdown
R2(config-if)#
%LINK-5-CHANGED: Interface FastEthernet0/1, changed state to up
R2(config-if)#exit
R2(config)#exit
R2#
%SYS-5-CONFIG_I: Configured from console by console
R2#disable
R2>show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
FastEthernet0/0        10.0.0.2        YES manual up                    up
FastEthernet0/1        10.1.0.2        YES manual up                    down
FastEthernet1/0        unassigned      YES unset  administratively down down
FastEthernet1/1        unassigned      YES unset  administratively down down
Vlan1                  unassigned      YES unset  administratively down down
show ip route
Using ROM Monitor (rommon) https://www.cisco.com/c/en/us/td/docs/routers/access/1900/software/configuration/guide/Software_Configuration/appendixCrommon.html
|Ctrl+Shift+6||Abort (if you spelled command wrong and IOS is looking for a DNS name it won't find)|
In the Cisco CCNA exam you will be asked various types of questions on subnetting:
Given a range of IP addresses, these can be subdivided to give you additional but smaller ranges of addresses, hence a subdivided network, or subnet. Subdividing or subnetting is needed to make more efficient use of limited IPv4 address space.
Historically, entire classes of address space ranges used to be given out by IANA. For instance if an organisation had 300 hosts, a Class C that supports 254 hosts would not be suitable, so a Class B would be assigned instead. However a Class B (using 16 bits for the network address and the final 16 bits for the host address) supports 65,534 hosts, which means thousands of wasted IP addresses. This did not matter until the growth of the internet accelerated and it was realised that the addresses would run out soon.
Classless Inter-domain Routing (CIDR) was introduced to allow further breaking-up of ranges.
|Class A||Subnet mask||Cidr||Whole Range||Private Range||No. of Networks||No. of hosts|
|Very large networks||255.0.0.0||/8||1.0.0.0 to 126.255.255.255||10.0.0.0 to 10.255.255.255||126||16,777,214|
|Class A Reserved - Loopback|
|Used for network diagnostics, not publicly routable||255.0.0.0||/8||127.0.0.1 to 127.255.255.255|
|Class B|
|Medium to large sized networks||255.255.0.0||/16||128.0.0.0 to 191.255.255.255||172.16.0.0 to 172.31.255.255||16,384||65,534|
|Class C|
|Small networks||255.255.255.0||/24||192.0.0.0 to 223.255.255.255||192.168.0.0 to 192.168.255.255||2,097,152||254|
|Class D|
|Multicast - not for assigning to hosts||224.0.0.0 to 239.255.255.255|
|Class E|
|Experimental and future applications||240.0.0.0 to 255.255.255.255|
|Class E Reserved - Broadcast|
|Broadcast address for “every host on my subnet”||255.255.255.255|
(this is gonna take a while…!)
Subnetting is a skill you need to practise regularly if you haven't done it in a while. It's a bit like keeping fit: you kinda have to keep doing it.
Subnetting is the process of taking an existing subnet of IP addresses and further subdividing these to give you more subnets.
An IP address is actually made up of 2 parts, a Network Address and a Host Address (sometimes known as Network ID and Host ID).
A Subnet Mask comes with every IP address to tell you which part of the address is the Network portion and which part is the Host portion.
It's a 32 bit number matching the length of the IP address itself. It is usually expressed in the form of a dotted decimal.
e.g. 192.168.0.1 255.255.255.0
Because the bits in a subnet mask must strictly be turned on from the most significant bit (highest value bit) on the left first going right, a byte in a subnet mask can only take 9 possible values:

00000000, 10000000, 11000000, 11100000, 11110000, 11111000, 11111100, 11111110, 11111111

This arrangement of bits gives you 9 possible combinations, which means a subnet mask in dotted decimal can only contain the following decimal numbers:

0, 128, 192, 224, 240, 248, 252, 254 and 255
If you have any other numbers in your subnet mask, it's wrong!
It's impossible to have a byte in a subnet mask like 00110011. You can't have a zero as the most significant bit (unless the whole byte is zeros) and you can't have zeros in between 1s. In other words, no skipping allowed.
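The "no skipping" rule can be checked programmatically: a mask octet is valid only if it's one of the nine left-filled bit patterns. A stdlib-only sketch (the function names are my own, just for illustration):

```python
def mask_octet_values():
    """The only legal subnet-mask octet values: 1-bits filled in from the MSB."""
    return [(0xFF << n) & 0xFF for n in range(8, -1, -1)]

def is_valid_mask_octet(octet):
    return octet in mask_octet_values()

print(mask_octet_values())
# [0, 128, 192, 224, 240, 248, 252, 254, 255]
print(is_valid_mask_octet(0b00110011))  # False -- zeros in between 1s
print(is_valid_mask_octet(240))         # True  -- 11110000
```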
If you're subnetting in the 4th octet, you can only use 6 subnet masks (.128, .192, .224, .240, .248 and .252)
Subnetting using 255.255.255.254 would give you 7 bits for the network address and 1 bit for host address, with the 1 bit giving you 2 possible combinations (xxxxxxx0 and xxxxxxx1), but you need to leave 2 addresses for your network address and the broadcast address. You've nothing left for hosts.
Subnetting with 255.255.255.255 is not possible because you have no bits for host addresses.
Effectively the subnet mask gives you this “partition” you can move around to give you more subnets at the expense of having fewer hosts per subnet.
Take for example the IP address 192.168.1.0 with subnet mask 255.255.255.0
If we turn this into binary we get:
192      . 168      . 1        . 0
11000000 . 10101000 . 00000001 . 00000000
255      . 255      . 255      . 0
11111111 . 11111111 . 11111111 . 00000000
The 1's in the subnet mask represent the Network address (or Network ID). In other words, what bits in the IP address to use as the network address. In the example the first 24 bits (the 3 sets of 1's in the subnet mask) are considered the network address, or 11000000.10101000.00000001.X, or 192.168.1.X in decimal.
The zeros in the subnet mask represent what bits you can use for the hosts, which is 8 bits. This gives you a block of 00000000 to 11111111 (0-255). To illustrate further I've highlighted the network address and host address sections below:
[------- NETWORK ADDRESS ------] [-HOST-]
192      . 168      . 1        . 0
11000000 . 10101000 . 00000001 . 00000000
255      . 255      . 255      . 0
11111111 . 11111111 . 11111111 . 00000000
If we had a different subnet mask, say 255.255.0.0, this changes the purpose of the bits
[ NETWORK ADDRESS ] [ HOST ADDRESS ---]
10       . 0        . 1        . 0
00001010 . 00000000 . 00000001 . 00000000
255      . 255      . 0        . 0
11111111 . 11111111 . 00000000 . 00000000
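The network/host split is just a bitwise AND with the mask, which Python's standard ipaddress module can demonstrate:

```python
import ipaddress

ip   = int(ipaddress.IPv4Address("10.0.1.0"))
mask = int(ipaddress.IPv4Address("255.255.0.0"))

# AND with the mask keeps the network bits;
# AND with the inverted mask keeps the host bits
network = ipaddress.IPv4Address(ip & mask)
host    = ipaddress.IPv4Address(ip & (~mask & 0xFFFFFFFF))

print(network)  # 10.0.0.0
print(host)     # 0.0.1.0
```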
Going back to 255.255.255.0 as the example, what you can do to subnet this is to “borrow” some of the host bits to create more subnets.
For example you could change the subnet mask to 255.255.255.240 like this:
192      . 168      . 1        | 0
11000000 . 10101000 . 00000001 | oooo0000
255      . 255      . 255      | 240
11111111 . 11111111 . 11111111 | 11110000
Now you have “borrowed” the first 4 bits in the 4th octet to use for extra subnets.
Borrowing 4 bits would give you 2^4 number of subnets, 16 subnets in this case.
You have 4 remaining host bits to use for hosts, so 2^4 host addresses per subnet. You'd get blocks of 16 addresses, but remember to subtract 2 addresses because you will need a Network address and a broadcast address in each block, so 14 usable addresses for hosts in each block.
Original subnet with new subnetted mask:

192      . 168      . 1        . 0
11000000 . 10101000 . 00000001 . [oooo]0000
255      . 255      . 255      . 240
11111111 . 11111111 . 11111111 . 11110000

The 16 subnets ('o' shows the "borrowed bits"):

11000000 . 10101000 . 00000001 . oooo|0000   192.168.1.0
11000000 . 10101000 . 00000001 . ooo1|0000   192.168.1.16
11000000 . 10101000 . 00000001 . oo1o|0000   192.168.1.32
11000000 . 10101000 . 00000001 . oo11|0000   192.168.1.48
11000000 . 10101000 . 00000001 . o1oo|0000   192.168.1.64
11000000 . 10101000 . 00000001 . o1o1|0000   192.168.1.80
11000000 . 10101000 . 00000001 . o11o|0000   192.168.1.96
11000000 . 10101000 . 00000001 . o111|0000   192.168.1.112
11000000 . 10101000 . 00000001 . 1ooo|0000   192.168.1.128
11000000 . 10101000 . 00000001 . 1oo1|0000   192.168.1.144
11000000 . 10101000 . 00000001 . 1o1o|0000   192.168.1.160
11000000 . 10101000 . 00000001 . 1o11|0000   192.168.1.176
11000000 . 10101000 . 00000001 . 11oo|0000   192.168.1.192
11000000 . 10101000 . 00000001 . 11o1|0000   192.168.1.208
11000000 . 10101000 . 00000001 . 111o|0000   192.168.1.224
11000000 . 10101000 . 00000001 . 1111|0000   192.168.1.240
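The same enumeration can be reproduced with Python's standard ipaddress module (pure stdlib, no Cisco tooling):

```python
import ipaddress

base = ipaddress.IPv4Network("192.168.1.0/24")
# Borrowing 4 bits turns the /24 into /28s: 2**4 = 16 subnets of 16 addresses each
subnets = list(base.subnets(new_prefix=28))

print(len(subnets))                  # 16
print(subnets[0], subnets[1])        # 192.168.1.0/28 192.168.1.16/28
print(subnets[-1])                   # 192.168.1.240/28
# usable hosts = block size minus the network and broadcast addresses
print(subnets[0].num_addresses - 2)  # 14
```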
You can use more borrowed bits or less depending on how many subnets you need.
You can subnet Class A, Class B and Class C addresses in the 4th octet.
You'll need to decide how many subnets you will need. If say you need 6 subnets, you will need to “borrow” at least 3 bits from the host portion of the IP address to give you a possible 8 subnets. You'll use the 6 subnets and have 2 left over either unused or for future network expansion. You obviously couldn't just use 2 bits as this would only give you 4 subnets to create, not enough for your requirement of 6.
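The "how many bits to borrow" step above is just a ceiling of a base-2 logarithm; a quick stdlib-only sketch (bits_to_borrow is my own illustrative name):

```python
import math

def bits_to_borrow(subnets_needed):
    """Smallest number of borrowed bits yielding at least the required subnets."""
    return math.ceil(math.log2(subnets_needed))

print(bits_to_borrow(6))  # 3 -- 2**3 = 8 subnets: 6 used, 2 spare
print(bits_to_borrow(4))  # 2 -- exactly 4 subnets, none spare
```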
When doing subnetting, Todd Lammle's approach from his CCNA study guide is to ask 5 questions:

How many subnets does the chosen subnet mask produce?
How many valid hosts per subnet are available?
What are the valid subnets?
What's the broadcast address of each subnet?
What are the valid hosts in each subnet?
(work in progress!)
Finally understand subnetting https://www.reddit.com/r/ccna/comments/ju7un7/i_finally_understand_ipv4_subnetting/
Cisco - IP Addressing and Subnetting for New Users https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-3.html
Subnet Zero (Cisco's interpretation) - https://www.cisco.com/c/en/us/support/docs/ip/dynamic-address-allocation-resolution/13711-40.html
configure terminal
ip route 10.0.0.0 255.0.0.0 10.0.0.100
show ip route
show ip route static
The ip route command takes the destination network of the static route you want to set (the network address and its associated subnet mask) followed by the next-hop IP address.
You may want to set a static route to use only as a backup in case something else fails. For example if you have a combination of static and dynamic routes (from a routing protocol like OSPF), you may want the dynamic routes to take precedent while keeping the static route only when the dynamic routes have a problem. It's called a Floating Static Route.
By default the Administrative Distance (AD) of static routes takes precedence over dynamic routes learned by a routing protocol. To make a static route less preferred than a dynamic route you can apply a different AD to the route:
ip route 10.0.1.0 255.255.255.0 10.1.3.2 115
115 is the AD. By default the AD of OSPF is 110, so an AD of 115 makes this static route a lower priority than OSPF, and OSPF routes will be selected. Should a route advertised by OSPF have some kind of problem, the router will fall back on this static route.
Floating static routes https://www.ciscopress.com/articles/article.asp?p=2180209
Static routes on a tp-link router https://www.tp-link.com/us/support/faq/560/
Cisco Networking Academy's Introduction to Routing Dynamically http://www.ciscopress.com/articles/article.asp?p=2180210&seqNum=12
When setting up computer networks, you have links between all the nodes and paths to determine which particular links to use.
You will need to maintain a list of the routes so devices know which path to send data.
You can either choose to maintain the routes manually (static) or have them maintained automatically (dynamic).
Static routes are fine for very small networks, but in big complex networks, maintaining static routes is a massive administration task. A Dynamic Routing Protocol can be used instead to automatically select paths and automatically respond to changes, e.g. if part of a network goes down, an alternate path can be selected to maintain resiliency.
All routing protocols will do the following:

Discover remote networks
Maintain up-to-date routing information
Choose the best path to destination networks
Find a new best path if the current path is no longer available
Interior Gateway Protocols (IGPs) are dynamic routing protocols for internal networks typically within an Autonomous System (AS).
Examples include RIP, EIGRP, OSPF, IS-IS.
Exterior Gateway Protocols (EGPs) route data between Autonomous Systems. The EGP used today is Border Gateway Protocol (BGP).
Administrative Distance (AD) is used to determine which route gets priority when a router has a selection of routes to choose from. The lower the AD, the more preferred. (similar to MX records in DNS)
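To illustrate how the lowest AD wins, here's a small Python sketch (illustrative only, not real router code; the default AD values are the standard Cisco ones):

```python
# For routes to the same prefix from different sources,
# the router installs the one with the lowest AD.
DEFAULT_AD = {"connected": 0, "static": 1, "eigrp": 90, "ospf": 110, "rip": 120}

candidates = [
    ("10.0.1.0/24", "rip",    DEFAULT_AD["rip"]),
    ("10.0.1.0/24", "ospf",   DEFAULT_AD["ospf"]),
    ("10.0.1.0/24", "static", 115),  # floating static: AD raised above OSPF's 110
]

best = min(candidates, key=lambda route: route[2])
print(best[1])  # ospf -- wins until its route disappears from the table
```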
AD can be manipulated from the defaults to make a particular route preferred, e.g. if a route learned from OSPF should be used instead of a static route (refer to floating static routes).
Something called a Floating Static Route can be set up when you want a static route to take effect only if another link goes down. For example if a route normally learned by a routing protocol e.g. OSPF goes down, this route is removed from the router's routing table, then replaced with a route learned from a static route.
By default a static route normally has an AD of 1, and OSPF 110. This would mean the static route would be preferred all the time over the OSPF route. You would change the AD of that static route so the OSPF route takes priority:
ip route 10.0.1.0 255.255.255.0 10.1.3.2 115
The 115 in that command would give that route an AD of 115, a higher value than the OSPF default of 110, so it would have lower priority.
You would typically use this for backup links (e.g. slower/cheaper links only used in emergencies). Remember to also add the floating static routes on both sides of the link, otherwise it won't work properly.
On the new 200-301 CCNA exam, OSPF is likely the main IGP you will be tested on (it's in Wendell's book).
It's the most complex of the IGPs and a lot to consider, so you should LEARN IT WELL!
OSPF - Open Shortest Path First
OSPF is an interior gateway protocol (IGP)
Is known as a Link State protocol (not distance vector)
Supports large networks and has fast convergence.
Is an Open standard so supported by all vendors (unlike EIGRP, which until relatively recently was Cisco proprietary)
In most literature, you'll see OSPF called “OSPF” but in most cases it is actually OSPF v2 they are talking about. There was an OSPF v1 but it is now considered obsolete, so most use OSPF v2 as standard and just refer to it as OSPF.
Uses Dijkstra's Shortest Path First algorithm to select best network paths.
OSPF works by OSPF-enabled routers sending each other messages called Link State Advertisements (LSA). OSPF being a Link State IGP, the updates allow the routers to learn a complete map (or topology) of the network along with the costs of the paths to get to them. All the information about the routers and links is passed on to all routers unchanged. This differs from IGPs like RIP that merely learn by “rumour” (Routing By Rumour), where each router shares only the routes it knows from its own point of view.
Paths are determined by the metric of cost (OSPF cost).
By default, OSPF automatically generates a cost based on the speed of the interface. Faster interfaces will be preferred over slower interfaces, so faster interfaces will have less cost while slower interfaces have more cost. (Less cost is better.)
In theory with OSPF having the ability to build interface speeds into the routing calculations, it should make OSPF a better choice for making routing decisions.
IGPs like RIP use the metric of hop count; there's no concept of the bandwidth of a link. For example in RIP a Fast Ethernet link of 100 Mbps is treated the same as a Gigabit Ethernet link of 1000 Mbps, even though the gigabit link can transfer more packets more quickly. So RIP may choose a route that, although it takes fewer hops, is actually slower because that route may be using slower interface links. OSPF on the other hand may pick a different route because it has taken interface speeds into account in its calculations.
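The difference between the two metrics can be demonstrated with a toy path comparison (the topology and link speeds here are made up purely for illustration):

```python
# Two candidate paths to the same destination:
paths = {
    "two_slow_hops":   [10, 10],            # Mbps per link
    "three_fast_hops": [1000, 1000, 1000],
}

# RIP: fewest hops wins, bandwidth ignored
rip_choice = min(paths, key=lambda p: len(paths[p]))

# OSPF: sum of per-link costs, using a 100,000 Mbps reference bandwidth
ospf_choice = min(paths, key=lambda p: sum(max(1, 100000 // bw) for bw in paths[p]))

print(rip_choice)   # two_slow_hops   -- 2 hops beats 3 hops
print(ospf_choice)  # three_fast_hops -- cost 300 beats cost 20000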
Also OSPF works at bigger scales better than older protocols such as RIP.
You don't really enable OSPF on a router as a whole; you enable it per interface.
The basic command to enable OSPF is entered in Global Config:
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
router ospf 1 means go into router config and open OSPF process ID 1. OSPF can have multiple processes on the router if you really want identified by the process ID. However it's rare to run more than one, so usually you just declare process ID 1.
network 10.0.0.0 0.0.0.255 area 0 means to look on the configured network interfaces on the router for anything matching this IP address and wildcard mask and enable OSPF on those interfaces and place them in Area 0. If there's more than 1 interface on that router that matches that command then all those matching interfaces will be enabled for OSPF. Those interfaces will then begin to send out hello messages and look to peer with adjacent OSPF interfaces and become neighbours.
Alternately you can enable OSPF directly on an interface:
R1(config-if)#ip ospf <process-id> area <area>
OSPF interfaces can also be made passive, so the router does not send OSPF hellos out of that interface (and won't form neighbours on it), but will still advertise the network on that interface to its other OSPF neighbours. You usually do this if the router connected to that interface belongs to another organisation that you don't want to share your network topology with.
Any loopback interfaces you set up on a router are best set up as passive interfaces too. Loopback interfaces are 'virtual' and do not terminate on a physical interface, so no neighbour can ever form on them. However routing protocols will still try to send hellos out of them, using up resources.
You can also protect the OSPF process by applying a password. OSPF neighbour relationships will only start to form if they have matching passwords.
The Router ID (RID) is something each OSPF router needs to identify itself to other routers. It can be manually configured or the router will automatically generate its own from either the highest loopback address configured on its interfaces or highest IPv4 address configured on its interfaces.
Router ID order of priority:

1. Manually configured router ID
2. Highest IP address on a loopback interface
3. Highest IP address on a physical interface
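The selection order can be sketched in a few lines (choose_rid is a hypothetical helper for illustration, not an IOS command):

```python
import ipaddress

def choose_rid(manual=None, loopback_ips=(), physical_ips=()):
    """RID selection: manual config, else highest loopback, else highest physical IP."""
    if manual:
        return manual
    for pool in (loopback_ips, physical_ips):
        if pool:
            # "highest" is compared numerically, not as a string
            return max(pool, key=lambda ip: int(ipaddress.IPv4Address(ip)))
    return None

# Loopbacks beat physical interfaces even when numerically lower
print(choose_rid(loopback_ips=["1.1.1.1"], physical_ips=["10.0.0.2", "10.1.0.2"]))
# 1.1.1.1
print(choose_rid(physical_ips=["10.0.0.2", "10.1.0.2"]))
# 10.1.0.2
```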
OSPF uses the metric of cost (OSPF cost) to decide on what routes to select.
Lower cost is better, higher cost is worse.
Cost (OSPF cost) is calculated :
Reference bandwidth ÷ interface bandwidth
This basically makes OSPF cost a measurement of the speed of interfaces. Less cost means faster interface. A router running OSPF on an interface will automatically generate costs for those interfaces.
By default on OSPF, the Reference Bandwidth is 100 Mbps. For historical reasons it was like this because when OSPF was invented, at the time they thought 100 Mbps was super-fast.
Using the default Reference Bandwidth of 100 Mbps, the costs would look like this:
|Interface||Bandwidth||Calculation||OSPF cost|
|GigabitEthernet||1000 Mbps||100/1000=0.1||1 (rounded up)|
|10GigEthernet||10000 Mbps||100/10000=0.01||1 (rounded up)|
The cost has to be a whole number, so anything less than 1 (e.g. 0.01) gets rounded up to 1.
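The cost calculation can be expressed as a small function (a sketch; the floor-at-1 behaviour matches the rounding just described):

```python
def ospf_cost(interface_bw_mbps, reference_bw_mbps=100):
    """Cost = reference bandwidth / interface bandwidth, with a minimum of 1."""
    return max(1, reference_bw_mbps // interface_bw_mbps)

print(ospf_cost(10))     # 10 -- Ethernet
print(ospf_cost(100))    # 1  -- FastEthernet
print(ospf_cost(1000))   # 1  -- GigabitEthernet: same cost, the problem
print(ospf_cost(10000))  # 1  -- 10GigEthernet: same cost again

# With the reference bandwidth raised to 100,000 Mbps the costs differ:
print(ospf_cost(1000, reference_bw_mbps=100000))   # 100
print(ospf_cost(10000, reference_bw_mbps=100000))  # 10
```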
You see the problem here is that all the faster links have the same cost when in actuality the interfaces are different so should have differing costs. The faster links will be better and should have less cost. Fortunately you can update the reference bandwidth to make this work.
When setting up OSPF, you will always want to update the Reference Bandwidth to a bigger value, and remember to do it on ALL your routers otherwise it will affect the calculations. Ideally making the reference bandwidth bigger than the fastest links in the whole of your network is best, and would allow for future speed upgrades.
If you use a Reference Bandwidth of 100,000 Mbps:

|Interface||Bandwidth||Calculation||OSPF cost|
|GigabitEthernet||1000 Mbps||100000/1000=100||100|
|10GigEthernet||10000 Mbps||100000/10000=10||10|
With these costs the OSPF process will start treating the faster links as better.
Within OSPF config the command is
R3(config-router)#auto-cost reference-bandwidth 100000
Interface bandwidth is a value automatically generated by taking the interface's physical bandwidth (physical speed). For example for a Fast Ethernet interface of 100 Mbps, the router will automatically generate a bandwidth value to this interface of 100.
This figure is used by OSPF to automatically generate the OSPF costs.
You can change this bandwidth figure which will in turn alter the OSPF cost. For example for a Fast Ethernet interface of 100 bandwidth, you can lower this to 10 to artificially make this interface “slower”. This in turn will make the OSPF cost higher so the OSPF process will be less likely to pick that interface to use. You may want to do this because you want another route to be picked first instead.
However changing the interface bandwidth will affect other software policies (e.g. QoS).
You can instead change the OSPF cost directly without having to change the interface bandwidth value. This way you can manipulate OSPF route selections without affecting other things.
show ip ospf interface FastEthernet 0/0

! in interface config
interface FastEthernet 0/0
 ip ospf cost 1500
The aim of OSPF is to share the full network topology with all routers in the network.
Each router stores the information it receives in a Link State Database (LSDB).
OSPF routers will send Link State Advertisements (LSA) to each neighbouring router and store the LSAs they receive in the LSDB. A router uses the information collected in its LSDB to make routing decisions and decide which routes to put in its routing table.
The routers will go through various stages of neighbour states to share their LSAs and form their own LSDBs.
In this instance it's probably easier to understand the stages if we imagine there are 2 routers forming an OSPF adjacency, R1 and R2.
R2 has already been enabled for OSPF, but there aren't any other OSPF routers. It's in the down state.
On R1, OSPF has just been enabled on the interface, and that router does not know about any other OSPF routers yet. The router will send out Hello packets. Within the Hellos it will tag its own Router ID and a neighbor router ID of 0.0.0.0. It will send them to the multicast IP 224.0.0.5 (the AllSPFRouters address). This should reach any OSPF router listening on that address.
A neighboring router (R2 in this example) will receive a Hello from a router, see that it contains neighbor RID of 0.0.0.0. This neighbouring router will be aware that another OSPF router exists so it will add it to its OSPF neighbor table. However R1 that sent the hello doesn't know about the neighbour yet.
R2, the neighbour router, sends a hello in response to a hello it received containing neighbor RID of 0.0.0.0 from R1. R1 sees this and sees its own neighbor RID tagged in the hello, so it knows R2 must have heard the hello it sent and is ready to become neighbours. R1 adds R2 into its neighbour table.
A DR/BDR election may take place depending on the network type.
The two routers will prepare to exchange information to each other about their LSDBs. They will choose which one will begin the exchange by selecting a master and a slave. They send to each other Database Description (DBD) packets. The router with the highest router ID will become master while the router with the lower RID will become slave.
The routers continue to send DBDs to each other. The DBDs at this stage do not contain detailed information about LSAs. Just basic information. The routers will use these DBDs to check against its own LSDB, to see what it has and what it may be missing in preparation to receive LSAs.
The routers send Link State Requests (LSR) to each other. These basically ask a neighbour for an LSA that it needs to complete its LSDB.
When a router receives an LSR, it sends a reply with a Link State Update (LSU) which contains the LSA containing the information it was asked to send.
To verify an LSU was received from a neighbour, it will send back a Link State Acknowledgement (LSAck) to confirm, a bit like a 'thank you'.
Routers will have a full OSPF adjacency with its neighbours. They will have fully synchronised their LSDBs so they have identical LSDBs and a map of the network topology.
In the meanwhile they will continue to listen to hello packets to maintain the neighbour adjacency.
It will also send hellos itself so neighbours know that it is still up. By default it sends a hello every 10 seconds.
A Dead Timer is maintained so the router knows if a router or link has gone down so it can respond accordingly. By default the Dead Timer is 40 seconds. Under normal operation, the dead timer starts at 40 seconds, counts down to 30 seconds, then by that time it should have received a hello from another router and reset the timer back to 40 seconds.
If the dead timer counts down to zero because it didn't receive any hellos, the router will assume that neighbour has gone down, remove it from its neighbour table and update its LSDB accordingly.
Cisco Troubleshooting TechNotes - OSPF Neighbor States https://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/13685-13.html
Establishing Neighbor Relationships https://www.ciscopress.com/articles/article.asp?p=2294214
Neighbour adjacencies work differently in OSPF depending on the type of network it is enabled in.
For broadcast network types, sharing of the network topology information (the LSDB) is managed by the Designated Router (DR) and the Backup Designated Router (BDR). This makes the sharing of information more efficient instead of having to flood LSAs everywhere along the network which could be repeated information.
A network segment of OSPF routers will elect the DR and BDR after initially forming neighbour relationships. It happens at the 2-Way State.
The elections work as follows:

The router with the highest OSPF interface priority becomes the DR, and the second highest becomes the BDR. If priorities are tied, the highest Router ID wins. The remaining routers become DROthers.

The interface priority is a number from 0-255, with a default of 1. Higher is better: 255 is the highest priority, and 0 means the router will never become the DR.
The DR/BDR election is non-preemptive, meaning once the process has declared the DR/BDR (and DROthers), they keep their roles until there is a change, e.g. OSPF process restarted, interface fails, interface shutdown, router fails etc. If you make a change, say you apply an OSPF interface priority of 255 to a router that is not currently the DR, even though its priority is now the highest, the existing DR will keep its role until a change occurs.
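The initial election logic can be sketched as follows (a toy model: highest priority wins, Router ID breaks ties, priority 0 never participates; non-preemption is deliberately not modelled):

```python
def elect_dr_bdr(routers):
    """Initial DR/BDR election sketch. Each router is a dict with 'rid' and 'priority'."""
    eligible = [r for r in routers if r["priority"] > 0]
    # Rank by priority first, then by RID compared numerically (octet by octet)
    def rank(r):
        return (r["priority"], tuple(map(int, r["rid"].split("."))))
    ranked = sorted(eligible, key=rank, reverse=True)
    dr  = ranked[0]["rid"] if ranked else None
    bdr = ranked[1]["rid"] if len(ranked) > 1 else None
    return dr, bdr

routers = [
    {"rid": "1.1.1.1", "priority": 1},
    {"rid": "2.2.2.2", "priority": 1},   # tie on priority -- higher RID wins
    {"rid": "3.3.3.3", "priority": 0},   # priority 0: never DR or BDR
]
print(elect_dr_bdr(routers))  # ('2.2.2.2', '1.1.1.1')
```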
OSPF has the concept of areas, used to segment a large OSPF network into smaller ones.
This helps keep sharing of routes to a minimum, saving bandwidth. Also saves on router CPU time as less routes require less calculations to work out best routes.
On small networks everything can be in a single area (Area 0), but larger networks one should consider setting up different areas.
A change on the network will only need to be updated on the routers in the 1 area, other areas will not be affected. This means LSAs flooding the network would only be confined to one area.
The use of route summarisation is crucial here to keep route tables efficient.
Each area has its own LSDB to maintain.
An AREA is a set of routers and links that share the same LSDB.
The BACKBONE AREA (Area 0) is an area that all other areas must connect to.
INTERNAL ROUTERS have all interfaces in the same area.
AREA BORDER ROUTERS (ABR) have interfaces in multiple areas. (they maintain a separate LSDB for each area they are connected to)
BACKBONE ROUTERS are connected to the backbone area (Area 0)
INTRA-AREA ROUTE is a route to a destination inside the same area.
INTER-AREA ROUTE is a route to a destination in a different OSPF area.
Example of a simple multi-area OSPF network
adapted from Jeremy's IT Lab's video on OSPF
OSPF - Open Shortest Path First - A link-state interior gateway protocol
Adjacency - formed when 2 routers talk to each other and share LSAs and sync LSDBs. Usually routers will form full state with Designated Router (DR) and Backup Designated Router (BDR) (technically different to a neighbor)
Neighbor - formed when 2 routers talk to each other and are aware of each other but don't exchange any additional information. Typically DROthers will be neighbours with each other as they are in the 2-Way state but not progressing to Full state. (technically different to an adjacency)
LSA - Link State Advertisement - Messages sent from OSPF routers containing their Router ID along with the networks they are linked to
LSDB - Link State Database - the database structure on a router of all the LSAs it has collected. LSDBs are identical for all routers in an area. However the routes they select for their routing tables may be different as they run the SPF algorithm from their point of view.
Manipulating OSPF path selection with Cost http://gregsowell.com/?p=2827
Can OSPF run on L3 switch?
Enhanced Interior Gateway Routing Protocol (EIGRP) is an interior gateway protocol.
Known as a distance vector protocol, but because of its enhancements it is sometimes known to fall into a category of “hybrid” protocols or an “advanced distance vector”.
Historically was a Cisco proprietary protocol, so generally only supported by Cisco routers and switches, not other vendors. If you are exclusively using Cisco, EIGRP can be a suitable IGP to use for your network. Otherwise you should use OSPF as this is supported on all vendors.
Will EIGRP run on a L3 switch? A: Probably, but I don't think they test you on this on the CCNA!
Routing Information Protocol (RIP) is an interior gateway protocol. It's a distance vector protocol that uses hop count as its metric for determining routes. A lower hop count is better (obviously!). It has a limit of 15 max hops. It does NOT take into account the bandwidth of interfaces, so an interface with higher bandwidth (and potentially better network performance) won't be preferred unless it happens to sit on the route with the lower hop count.
Will RIP run on an L3 switch? A: Probably but I don't think the CCNA has this on the test.
At a basic level, a Virtual Local Area Network is a way to arrange a set of devices on a network into different logical and virtual networks.
For example if you had an office with PCs from different departments, and you connected them all together into one switch or hub, all the data going to and from all the PCs would be visible to each other, potentially causing congestion and inefficient use of the network.
Arranging the PCs into logical groups so they would form their own little “virtual LANs” helps ease congestion. It also helps with security.
Cisco expands on this idea in much more technical detail…
In normal Layer 2 switch operations, a switch will take an ethernet frame it receives, check the destination MAC address and sends it to its destination port. It will learn MAC addresses of hosts connected to it, note the interface port number it is connected to and keep a record of it in its MAC address table. The switch effectively knows that a particular MAC address was reachable through a particular port.
If it receives a unicast frame (frame from one single node to another single node), the switch checks for the destination MAC address, then sends it to the destination interface.
If the switch receives a broadcast (destination MAC address FFFF.FFFF.FFFF), the switch will flood the frame out of all ports except the port it received the frame from. ARP uses this process of flooding to FFFF.FFFF.FFFF to discover MAC addresses.
A problem arises when during flooding, the frame literally goes everywhere. It will be transmitted to all hosts inside the broadcast domain. In smaller networks this may not be too big of an issue. However larger networks will be affected as many hosts will be spending time having to ignore and discard broadcasts not intended for them.
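The learn-then-flood behaviour described above can be sketched in a few lines (a toy model with made-up port numbers and MAC addresses, not real switch firmware):

```python
BROADCAST = "ffff.ffff.ffff"

def switch_forward(mac_table, frame, in_port, all_ports):
    """Learn the source MAC, then forward a known unicast or flood everything else."""
    mac_table[frame["src"]] = in_port  # learn: this MAC is reachable via this port
    if frame["dst"] != BROADCAST and frame["dst"] in mac_table:
        return [mac_table[frame["dst"]]]               # known unicast: one port
    return [p for p in all_ports if p != in_port]      # flood: all but the ingress port

table, ports = {}, [1, 2, 3, 4]
# A broadcast (e.g. an ARP request) from port 1 floods to every other port
print(switch_forward(table, {"src": "aaaa.aaaa.aaaa", "dst": BROADCAST}, 1, ports))
# [2, 3, 4]
# The reply is a known unicast: the switch learned that aaaa... lives on port 1
print(switch_forward(table, {"src": "bbbb.bbbb.bbbb", "dst": "aaaa.aaaa.aaaa"}, 2, ports))
# [1]
```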
Virtual Local Area Networks (VLAN) offer a way of dividing the single broadcast domain into smaller ones. Each broadcast domain can then be assigned a VLAN tag. Any broadcast traffic would then only be flooded out to switchports within its own VLAN.
Cisco Networking Academy Switched Networks Companion Guide: VLANs https://www.ciscopress.com/articles/article.asp?p=2208697&seqNum=4
On a switch, everything basically starts out as one big VLAN. In fact by default, on Cisco switches, all interfaces are assigned to VLAN 1. (If you never configured any additional VLANs, the switch would just carry on working as you would expect).
If you want additional VLANs, you create the VLANs in the switch's configuration, then assign the desired interfaces on the switch to your VLAN.
Each VLAN is assigned a number. You can give it a descriptive name if you want too, e.g. Sales, Support, Accounts, Management, Admin etc.
Once a switch is set up with the VLANs, traffic would only go to the interfaces on that same VLAN. It splits a big broadcast domain into smaller broadcast domains. This is how it helps with network efficiency and security.
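A minimal sketch of the steps above on Cisco IOS (the VLAN number, name and interface are example values):

```
! create VLAN 10 and give it a descriptive name
vlan 10
 name Sales
! assign an interface to the VLAN as an access port
interface fastethernet 0/1
 switchport mode access
 switchport access vlan 10
```

Verify the result with show vlan brief.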
Frames received on a switch often need to be sent on to other switches and routers.
If a frame originated in one particular VLAN and was simply forwarded to another switch as-is, the receiving switch would not know it was originally part of a VLAN, so you would lose the benefit of implementing VLANs.
To solve this, VLAN tagging carries the VLAN information within the frames so the next switch along knows the frame was part of a VLAN and can proceed accordingly. A VLAN header is added to the frame (thus “tagging” the frame) before it is transmitted.
A receiving switch should only forward any tagged frames to interfaces belonging to the same VLAN.
Any VLAN tags are removed upon final transmission to the destination host. End hosts would be unaware that any tagging or VLAN exists.
A trunk is a link between two switches used to carry frames for multiple VLANs along with their VLAN tags. An interface at each end is configured as a trunk port and the switches are linked together (traditionally with a crossover cable).
Different tagging schemes exist but the most common scheme is 802.1q or simply “Dot1Q”.
Some switches and routers may support other types of tagging, so you may need to explicitly specify Dot1Q.
Some switches will require you to set up a trunk port with these commands:
! in interface config
switchport trunk encapsulation dot1q
switchport mode trunk
You may require explicit dot1q declaration for setting up Router On A Stick too.
Some devices might throw up an error message if you declare dot1q, but it should not cause any harm to the config.
Traffic between different VLANs and different subnets is kept separate, because that's the whole point of a VLAN! You can't get a device on one VLAN to talk to a device on another VLAN without some help: you need a way of applying inter-VLAN routing. When you need to route traffic between different VLANs there are 3 common solutions:
This is the most obvious and basic solution. Different VLANs will have their own subnets and routers join subnets together.
This method uses an interface for each subnet.
However this solution does not scale very well when you have many VLANs.
You are limited by the number of physical interfaces on the router. Some routers can be upgraded with additional interfaces, but there is still a limit to this.
This uses subinterfaces on a router to allow for more than one subnet to be assigned to a router's interface.
This scales a little better than having a router interface per VLAN. However, all inter-VLAN traffic still has to traverse the single physical link between the switch and the router, which can become a bottleneck.
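A minimal Router On A Stick sketch (VLAN numbers and addresses are example values); each subinterface carries its VLAN's dot1q tag and acts as that subnet's default gateway:

```
! physical interface carries the trunk and has no IP address of its own
interface fastethernet 0/0
 no shutdown
! one subinterface per VLAN
interface fastethernet 0/0.10
 encapsulation dot1q 10
 ip address 10.0.10.1 255.255.255.0
interface fastethernet 0/0.20
 encapsulation dot1q 20
 ip address 10.0.20.1 255.255.255.0
```

Hosts in VLAN 10 would then use 10.0.10.1 as their default gateway, hosts in VLAN 20 use 10.0.20.1.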
n.b. this is probably the most likely thing you will be tested on in the CCNA exam!
Your regular Layer 2 switch WON'T WORK for this. A Layer 3 switch uses Switched Virtual Interfaces (SVIs). The clients set their default gateway for traffic outside their subnet to these SVIs. The switch then routes inter-VLAN traffic through its backplane.
This is the most likely real world solution for inter-VLAN routing you'll use.
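A minimal SVI inter-VLAN routing sketch on a Layer 3 switch (VLAN numbers and addresses are example values):

```
! enable IP routing on the Layer 3 switch
ip routing
! one SVI per VLAN; clients in each VLAN use the SVI address as their default gateway
interface vlan 10
 ip address 10.0.10.1 255.255.255.0
 no shutdown
interface vlan 20
 ip address 10.0.20.1 255.255.255.0
 no shutdown
```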
Cisco recommended practices for VLANs https://www.cisco.com/c/en/us/support/docs/smb/routers/cisco-rv-series-small-business-routers/1778-tz-VLAN-Best-Practices-and-Security-Tips-for-Cisco-Business-Routers.html
Further reading: OCG
VLAN Trunking Protocol (VTP) is a system allowing you to manage VLANs from one place. It is helpful if you have a complex VLAN setup. Updates and changes can be made in one place instead of having to update every switch individually. A VTP server is set up on a switch, then other switches are set up as VTP clients.
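A minimal VTP sketch (the domain name is an example value):

```
! on the switch acting as VTP server
vtp domain MYDOMAIN
vtp mode server
! on every other switch
vtp domain MYDOMAIN
vtp mode client
```

show vtp status displays the mode and the configuration revision number.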
Cisco IOS DHCP, external DHCP server, clients, “helper-address”, Option 43 for wireless LAN controllers,
Maybe a section on step by step on how a client sends out DHCP request and how it is fulfilled by a DHCP server.
Hot Standby Router Protocol (HSRP) is one of the First Hop Redundancy protocols (FHRP).
They assign a group of redundant routers a “virtual IP address”, with clients using this IP address as their default gateway. Should the active router in the group go offline, one of the other routers takes over the virtual IP address and routing continues as normal.
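A minimal HSRP sketch for two routers sharing virtual IP 10.0.0.1 (addresses, group number and priority are example values):

```
! R1, the preferred active router
interface fastethernet 0/0
 ip address 10.0.0.2 255.255.255.0
 standby 1 ip 10.0.0.1
 standby 1 priority 110
 standby 1 preempt
! R2 configures the same "standby 1 ip 10.0.0.1" on its own interface
! with the default priority of 100, so R1 wins the active election
```

Clients point their default gateway at 10.0.0.1, not at either router's real address.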
https://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/5234-5.html (archive http://archive.is/Ta2cd)
Spanning Tree solves an awkward problem with Layer 2 switching. To understand what it does, you need to understand a little about Layer 2 switch operations and broadcasting and flooding.
Switches allow hosts on the same subnet (or VLAN) to communicate with each other. Frames (the Layer 2 protocol data unit) are sent via switches.
Should a switch fail, it could bring down the network.
When installing a network, one may wish to install multiple switches for redundancy. Should one switch fail, traffic can be re-routed via a different path and no downtime occurs.
(example from Cisco Press - Spanning Tree Concepts)
However problems occur due to the way switches work.
Normally a switch will keep a table of MAC addresses of the devices connected to its ports. If the switch receives a frame with the destination MAC address that it has learned and has stored in its table, it will transmit the frame directly to that port.
If the switch has not learned that MAC address, by default it will flood the frame out of every port (except the port it received the frame on). With redundant links between switches this can cause a switching loop, as each switch continues to forward the frames to the others, causing a potential broadcast storm.
The switches spend more and more time updating their MAC address tables until their CPUs overload and they crash. Obviously this is catastrophic for the network and is never wanted.
Spanning Tree solves this problem. It does this by blocking ports that could potentially form a loop.
The Spanning Tree Protocols automate the task of blocking ports and closing loops.
n.b. Spanning Tree Protocol still uses terminology referring to Bridges. A bridge was basically a legacy 2-port device used to separate collision domains back when most networks had to use hubs. In practice bridges became obsolete once switches were cost-effective replacements for hubs, but the terminology remains in STP.
By default Spanning Tree is enabled on all switches.
When Spanning Tree is enabled, it will assign ports on a switch to one of the following states:
Root ports + designated ports are the most direct path to and from the root bridge and transition to a forwarding state.
One Root Bridge (Root Switch) is selected based on an election.
By default the election works on the Bridge ID, which combines a configurable bridge priority with the bridge's MAC address.
Lowest is preferred.
In the case of identical priorities, the MAC address is used as the tie breaker. However relying on this should be avoided, as older switches will likely have numerically lower MAC addresses while having slower interfaces than a more modern switch. Best practice is to control the Spanning Tree election by setting bridge priorities manually.
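A minimal sketch of manipulating the election, run on the switch you want to be root (VLAN and priority are example values; the priority must be a multiple of 4096):

```
! explicitly set a low priority for VLAN 1
spanning-tree vlan 1 priority 24576
! or let IOS pick a priority lower than the current root's
spanning-tree vlan 1 root primary
```

Verify with show spanning-tree.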
Bridge Protocol Data Units (BPDUs) are transmitted while the STP determines the states of the ports and assigns them. The BPDUs contain the Bridge IDs and the costs.
The disadvantage of Spanning Tree is that it does reduce the available bandwidth of the network as ports have to be closed. However this is necessary because otherwise the network could form loops and cause broadcast storms and crash the network. You can get around this by doubling-up the upstream links to switches and setting up Etherchannel. Another potential solution is to use Layer 3 switches and enable Layer 3 routing and manage routes with static routes or dynamically with an IGP e.g. OSPF.
“Base MAC address” used in Spanning Tree elections https://learningnetwork.cisco.com/s/question/0D53i00000Kt67u/stp-what-is-the-source-of-mac-address-in-bridge-id-details
Port security, DHCP snooping, Dynamic ARP inspection, 802.1x Identity Based Networking
Port Security is a way to protect a switch from having unauthorised devices connected to it.
It works by checking the source MAC address of frames sent to it.
If you have ever at work disconnected your PC or office VOIP phone and connected a different PC to the ethernet cable on your desk, and found that you cannot logon to the network, this will be Port Security being triggered!
A switch will monitor incoming frames, and determine if a violation occurs. Should a violation occur, different actions can be taken depending on the configuration.
One should take into account that nowadays MAC addresses can be easily “spoofed”. For instance you can go into your PC's Windows OS settings and just change the MAC address of your ethernet adapter. However Port Security on a Cisco switch doesn't have to check for specific MAC addresses. It can dynamically learn a MAC address, then ensure no other MAC address can use a particular port, otherwise a violation will be triggered.
Port Security is enabled per port.
If you're using EtherChannel, Port Security should be enabled on the port-channel interface rather than the physical interfaces.
If the switch detects a Port Security violation, it will take an action to protect the port and the rest of the switch and network.
If a Port Security violation occurs, action should be taken to correct the issue.
For Shutdown, a port will be put into Error Disabled (Errdisable). To bring it back up, one must do a “shut, no shut” on the interface to reset it. However if the problem that caused the shutdown in the first place is not taken care of, the port will go into Errdisable again! If the violation was due to another device (another MAC address) triggering port security, this other device must be removed.
Auto Recovery is also possible: an interface can be brought back up automatically after a recovery timer expires.
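A minimal Port Security sketch (interface, maximum and timer are example values):

```
interface fastethernet 0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation shutdown
 ! learn the first MAC address seen and save it in the config
 switchport port-security mac-address sticky
exit
! optional auto recovery: re-enable errdisabled ports after 300 seconds
errdisable recovery cause psecure-violation
errdisable recovery interval 300
```

Check the state with show port-security interface fastethernet 0/1.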
In normal DHCP operations, a DHCP server will issue TCP/IP config to hosts automatically. This config includes IP addresses, subnet masks, default gateways etc.
Problems occur when 2 or more DHCP servers installed on the same network start trying to answer the DHCP requests. Your real DHCP server may effectively be blocked from issuing proper TCPIP config to hosts by a rogue DHCP server.
A rogue DHCP server may be a malicious attacker, but is more likely just a user who brought their home router into work and connected it to the network, with the router's DHCP server messing everything up.
The DHCP Snooping feature on a switch will monitor DHCP traffic and drop anything that may be rogue.
The port where the real DHCP server is connected is declared as a trusted port in the config, so the switch knows not to drop that particular DHCP traffic.
ip dhcp snooping ! enable DHCP snooping globally on switch
ip dhcp snooping vlan 1 ! declare single VLAN to protect
ip dhcp snooping vlan 10,199 ! declare list of VLANs you want to protect
int f0/0 ! enter interface config
ip dhcp snooping trust ! enable the interface as a trusted DHCP interface
Access Control Lists identify characteristics of a packet, then can make a decision to deny or permit that packet.
ACLs can be configured to identify the following:
Originally designed as a security measure.
ACLs come in a number of different configurations:
|Standard Numbered ACL||Extended Numbered ACL|
|Standard Range 1-99, 1300-1999||Extended Range 100-199, 2000-2699|
|Standard Named ACL||Extended Named ACL|
Standard ACLs work by checking the Source Address (or Source Subnet) only.
Extended ACLs work by checking Source IP address/subnet, Destination IP address/subnet, protocol (TCP/UDP etc), port number
As general guidelines:
Important things to remember:
Order of ACL entries
One more thing…
Verify access lists
! running-config will show which ACLs are assigned to which interfaces, if any
show running-config
! if you are only interested in one interface
show ip interface f1/0 | include access list
! example output
Outgoing access list is 100
Inbound access list is 101
! "not set" would show if no ACL is applied
Numbered Standard example:
! deny host 10.10.10.10, permit the rest of the 10.10.10.0/24 subnet
! n.b. use wildcard masks to declare the relevant hosts or subnet
access-list 1 deny 10.10.10.10 0.0.0.0
access-list 1 permit 10.10.10.0 0.0.0.255
! assign to interface as an OUT ACL
interface f0/1
ip access-group 1 out
Numbered Extended example
! Permit host 10.0.1.10 to access telnet (TCP port 23) on host 10.0.0.2, deny telnet from everything else
! final "permit ip any any" is required so all other traffic (other apps) is not affected
! "access-list 100" declares ACL 100, which is in the 100-199 range so is interpreted as an Extended ACL
access-list 100 permit tcp host 10.0.1.10 host 10.0.0.2 eq telnet
access-list 100 deny tcp 10.0.1.0 0.0.0.255 host 10.0.0.2 eq telnet
access-list 100 permit ip any any
! assign ACL to interface
int f1/0
ip access-group 100 in
! to remove the ACL from the interface, use the command **no**
! n.b. the ACL will still be in the running-config but does nothing while not assigned to an interface
int f1/0
no ip access-group 100 in
Named ACL example:
! allow telnet from 1 host but not others, allow ping from one host but not others
ip access-list extended cheesef1/0_in
permit tcp host 10.0.1.10 host 10.0.0.2 eq telnet
deny tcp 10.0.1.0 0.0.0.255 host 10.0.0.2 eq telnet
deny tcp any host 10.0.0.2 eq telnet
permit icmp host 10.0.1.11 host 10.0.0.2 echo
deny icmp 10.0.1.0 0.0.0.255 host 10.0.0.2 echo
permit ip any any
exit
int f1/0
ip access-group cheesef1/0_in in
To edit an ACL
! enter named-ACL config mode for the numbered ACL, then add an entry at sequence number 15
ip access-list extended 100
15 deny tcp etc
|Static||Dynamic||Port Address Translation|
|1-to-1 mapping of an external public IP address to a private internal IP address||Mapping of a pool of external public IP addresses to private internal IPs and the mapping is purged as and when is required||Uses Dynamic NAT along with TCP port numbers mapped to specific apps on a host device (“overload”)|
|Most useful for things like a web server or email server that requires a constant presence on the Internet. Pure Static NAT will require total use of a single public IP address per host.||To work fully, Dynamic NAT would require a public IP address for every host on the network. If there are more hosts than public IP addresses, some hosts will not be able to access the public internet until a public IP address is freed up.||Can make use of just a single public IP address and share it with multiple hosts. In theory the limit is how many TCP/UDP ports there are (65535, 16 bits), but practically Cisco says about 3000 is the limit due to RAM and CPU constraints? Very common in home broadband internet connections that share a single IP address via a home wifi router with multiple client devices.|
Network Address Translation (NAT) is the process of converting an IP address that originated on 1 network into another IP to be used on another network, then converting it back again.
Predominantly used to convert private IP addresses (RFC 1918) to a publicly routeable IP address so the hosts on the private network can access resources on the Internet.
Does also have limited use where there are 2 networks made up of private IP addresses (e.g. if all hosts were in the 192.168.x.x range) and they needed to be merged together temporarily, although this is rare. (2-way NAT)
Historically NAT was conceived as a way to extend the life of the existing IPV4 address space.
With NAT, it was possible for many different devices to share a small number of public IP addresses. No longer was it required for each device/host to have its own publicly routeable IPv4 address.
In Cisco IOS, Access Control Lists (ACLs) are used to set up NAT.
Static NAT translates one public IP address to a private IP address.
Sometimes known as a 1-to-1 NAT.
To set this up on a Cisco router
! mark the outside and inside interfaces
int f0/0
ip nat outside
int f0/1
ip nat inside
! statically map inside host 10.0.1.10 to public address 203.0.113.3
ip nat inside source static 10.0.1.10 203.0.113.3
Dynamic NAT makes use of a pool of addresses which are given out as they are required.
For devices like workstations that do not host any servers/services this is ideal as a public IP address does not need to be reserved for a device that may not be needed all the time.
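A minimal Dynamic NAT sketch (pool name, address ranges and ACL number are example values; the inside/outside interfaces are marked the same way as for static NAT):

```
! pool of public addresses to hand out as needed
ip nat pool PUBLICPOOL 203.0.113.10 203.0.113.20 netmask 255.255.255.0
! inside hosts eligible for translation
access-list 1 permit 10.0.1.0 0.0.0.255
! tie the eligible hosts to the pool
ip nat inside source list 1 pool PUBLICPOOL
```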
NAT Overload, also called Port Address Translation (PAT), makes use of different TCP and UDP port numbers.
The NAT router keeps track of source ports (as well as source addresses) and translates them; when the reply packet is received, it translates back to the original source port and source address.
Effectively it allows many different devices on an “inside network” with private IP addresses to share a single external public IP.
This is a little different to straight Dynamic NAT that still requires enough public IPs for the devices you have on your network. e.g. if you had 10 devices, you would need a pool of 10 public IP addresses for your devices.
It's very difficult to explain in words!
Take for example a PC with a browser installed.
The PC is installed on a local private network, so has been assigned a private IP address, 192.168.0.20.
The network's gateway (router) has been set up with NAT Overload using Port Address Translation.
The PC needs to browse an external site, let's say http://198.51.100.10
The PC and its browser will open a TCP connection and decide on a source TCP port to use (in reality this is negotiated between the PC's operating system and the installed browser)
So the PC might open a connection from source address and port 192.168.0.20:49165 to destination 198.51.100.10:80
The router receives this packet and does a NAT translation. It translates the source IP address and source port to its assigned public address and some TCP port.
So what started as source 192.168.0.20:49165, destination 198.51.100.10:80 becomes, for example, source 203.0.113.1:4096, destination 198.51.100.10:80.
The router keeps track of this translation in a table.
The public web server on 198.51.100.10 sees this packet, and sends a reply from source 198.51.100.10:80 to destination 203.0.113.1:4096
The router receives this reply packet. It sees the destination port in the reply is 4096, and it knows 4096 was assigned to the device on internal private address 192.168.0.20.
So it translates the destination 203.0.113.1:4096 back to 192.168.0.20:49165, while the source stays 198.51.100.10:80.
The PC receives the reply and processes the packet.
There can be as many simultaneous translations as there are available TCP ports.
Browsers open multiple connections, each with its own source port. This is why you can browse the same site in different tabs on the same PC and browser and still get different pages: each session is separated by a different source TCP port.
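The walkthrough above can be sketched in IOS config (ACL number and addressing are example values); the overload keyword is what enables PAT on the outside interface's single public IP:

```
! inside hosts allowed to be translated
access-list 1 permit 192.168.0.0 0.0.0.255
! mark the inside (LAN) and outside (public) interfaces
interface fastethernet 0/1
 ip nat inside
interface fastethernet 0/0
 ip nat outside
! overload many inside hosts onto the outside interface's IP
ip nat inside source list 1 interface fastethernet 0/0 overload
```

Active mappings can be inspected with show ip nat translations.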
Turns out what you thought you knew about VPNs you should probably forget and start from scratch. It's more complex than you thought. :(
A Virtual Private Network is a private network within a public network. That's it.
You may think that a VPN is a “secure network”, and although this is common, strictly speaking a VPN does not automatically mean it is secure.
You may have heard of VPN providers selling VPN as a service. What they're actually selling you is more than literally the VPN. They are selling a secured and anonymised internet service and using VPN technologies to supply this to you.
Your plain old ordinary VPN is literally just a private network over a public network.
VPNs are built using different devices, modes and technologies.
Traditionally, to link private networks situated in different locations together, one would have to install private links between the sites: leased line, MPLS, satellite, frame relay etc. This can be expensive, especially if there are many sites to be linked together.
However one can do this virtually using a public network like the Internet. You use a tunnelling protocol to encapsulate data from your private networks, send it across the public network and when it reaches the other end of the tunnel it is de-encapsulated. This is your virtual private network.
Normally you would implement security on top, to prevent others from seeing or changing the data as it is transmitted.
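As an illustration, a plain GRE tunnel is the simplest form of this: it encapsulates traffic across a public network but adds no security of its own (all addresses below are example values):

```
! one end of the tunnel; the other router mirrors this config
! with the source and destination swapped
interface tunnel 0
 ip address 172.16.0.1 255.255.255.252
 tunnel source 203.0.113.1
 tunnel destination 198.51.100.1
```

In practice GRE is commonly combined with IPsec when confidentiality is needed.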
sample chapter from CCIE Routing and Switching v5.1 Foundations: Bridging the Gap Between CCNP and CCIE
Tunnel Mode and Transport Mode https://www.ciscopress.com/articles/article.asp?p=25477&seqNum=2 (Same article! https://www.informit.com/articles/article.aspx?p=25477)
IPsec by Oracle
IPsec pentesting https://subscription.packtpub.com/book/networking_and_servers/9781787121829/1/ch01lvl1sec17/pentesting-vpn-s-ike-scan
IOS command hierarchy,
Console, telnet, SSH, VTY lines,
enable password, service password-encryption, enable secret
login banner, exec banner
Cisco Switches contain different types of memory to store configurations
RANDOM ACCESS MEMORY RAM (sometimes called DRAM)
running-configuration and the IOS program loaded from Flash.
This is volatile memory that is lost when the device is powered off, exactly like RAM in a PC.
Stores the IOS software and vlan.dat (VLAN config)
Flash is sometimes chips on the device's motherboard and sometimes removable memory cards.
NVRAM Non volatile Random Access Memory
startup-configuration. Contents not lost when the device is powered off, but can be re-written to if required.
READ ONLY MEMORY ROM
Stores the bootstrap program, which makes the device load the IOS from Flash memory into RAM (similar to a BIOS on a PC)
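The memory types above map onto a couple of everyday commands:

```
! save the running-config (RAM) into the startup-config (NVRAM)
copy running-config startup-config
! list the contents of Flash, including the IOS image
show flash:
```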
Further reading OCG Vol 1, Chapter 4, Page 100, Storing Switch Configuration Files
A Layer 2 switch will support a single Switched Virtual Interface for the purposes of management. This includes remotely connecting to it via SSH, telnet, or for getting info via SNMP.
Further reading OCG1 Chapter 6
A Layer 3 (multilayer) switch will support multiple SVIs and inter-VLAN routing between the SVIs.
Syslog, syslog severity levels 0-7, debug, SIEM, NMS, SNMP, SNMP v2c, SNMP v3, MIB
work in progress!
QoS, dedicated (separate) networks, converged networks, shared bandwidth, quality requirements
latency, jitter, loss, FIFO (First In First Out), congestion
QoS classification, QoS marking, Cos, DSCP, ACL, NBAR, 802.1q
Queuing, CBWFQ (Class Based Weighted Fair Queuing), LLQ (Low Latency Queuing), MQC modular QoS CLI, MQC framework, class maps, policy maps, service policies.
Policing and shaping
Worm and junk traffic mitigation
There's been a lot of buzz around Cloud Computing in recent years. Things like “Cloud is somebody else's computer” and other similar things. Actually cloud has some strict definitions before you can technically call it cloud.
The 5 things that define cloud according to NIST:
Differences between SOAP and REST
Python Netmiko to SSH into Cisco IOS devices
Cisco Live DEVNET-1725: How to Be a Network Engineer in a Programmable Age
Unicast, Broadcast & Multicast https://learningnetwork.cisco.com/thread/66629
Layer 2 and Layer 3 switching
TCP/IP Model (TCP/IP Layers) https://docs.oracle.com/cd/E19683-01/806-4075/ipov-10/index.html
Subnet prefix https://docs.oracle.com/cd/E19109-01/tsolaris8/816-1048/networkconcepts-2/index.html
Switches are layer 2 but ACLs can check the IP address and tcp/udp and ports??? https://community.cisco.com/t5/switching/layer-2-devies-and-acl-s/td-p/1284013
Types of WAN connection - https://www.ictshore.com/free-ccna-course/wan-connections/
CCNA subreddit https://www.reddit.com/r/ccna/
Cisco Adaptive Security Appliance (Cisco ASA) https://www.cisco.com/c/en_uk/products/security/adaptive-security-appliance-asa-software/index.html
Things I've been told to be careful of or to avoid when doing networking.
Apparently there are some security issues with leaving the native VLAN on VLAN 1, so it should be changed to another unused VLAN.
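A minimal sketch of moving the native VLAN on a trunk (interface and VLAN number are example values; the native VLAN must match on both ends of the trunk):

```
interface gigabitethernet 0/1
 switchport mode trunk
 ! use an unused VLAN instead of the default VLAN 1
 switchport trunk native vlan 999
```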
You will have different VLANs set up on your network if you require network data to only go where you want it to go. Layer 2 switches by default work as one big broadcast domain. This means if you send data through one port on a switch and the switch does not know the destination MAC address, it will flood the data out of all other interfaces. VLANs separate the interfaces into different broadcast domains, which makes transmitting data a little more efficient and a little more secure.
If you have a complex VLAN setup with various VLANs, it can be quite a task to set this up on all your switches. For VLANs to work properly, VLAN config needs to be setup on all the switches in the network expected to trunk traffic for those particular VLANs. This instructs the switches to direct the relevant ethernet frames carrying the dot1q headers to the right places.
To manage VLANs more easily, the VLAN Trunking Protocol (VTP) allows you to manage these various VLANs in 1 place. You can set up a switch as a VTP server, then set up other switches as VTP clients.
The thing to be careful of is introducing an old switch you may have found lying around, plugging it into the network, and the switch happens to be running as a VTP server with a higher revision number. VTP will then start using the VLANs set up on that old VTP server, taking priority over your real VTP server. This could delete all the production VLANs you spent ages carefully setting up and drop live hosts from the network. THIS IS REALLY BAD.
Probably safer to reset the switch to factory settings.
If you really need the old config on that switch, isolate the switch and copy the running-config, then reset the switch to factory settings.
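Before connecting a second-hand switch, a cautious sketch of checking and neutralising its VTP state:

```
! check mode and configuration revision number
show vtp status
! transparent mode means it will not overwrite anyone's VLAN database
conf t
 vtp mode transparent
! deleting the VLAN database also resets the revision number to 0
delete flash:vlan.dat
```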
DHCP would normally handle issuing IP addresses (and other information like DNS server etc) by a DHCP server answering a request from a DHCP client.
For this to work properly the DHCP server must be the sole server (or maybe from one other backup you've installed) to supply this information from one central place.
If another device hosting a DHCP server is connected to the network, this may intercept the DHCP requests and cause problems with the rest of the network.
Typically this “rogue” DHCP server comes from a user bringing their own home broadband router into the office to expand the available ports. The router has a built-in DHCP server, and the user connecting it into the ethernet switch is unaware it will cause problems.
To reduce the likelihood of problems, Port Security should be enabled on the switches, and DHCP Snooping should be enabled to monitor DHCP activity and ensure a rogue server does not compromise the network.
Video by Network Chuck https://youtu.be/wwwAXlE4OtU
Try putting 070C285F4D06 (link) into google. You'll probably find it is the result of encoding “cisco” on a Cisco IOS router. The algorithm that produces that string has been broken and shouldn't be used to protect data.
Cisco routers and switches that come from factory have no passwords or restrictions in their settings.
Cisco IOS has different levels of access within the interface:
User exec and Privileged exec can be protected by a password that must be entered by the user to gain access to that mode. The password itself is stored in the running-config (and startup-config).
For instance you can use the “enable password” command to apply a password:
R1(config)#enable password mypasswordpleasedonthack
However with this method the password is stored in running-config in PLAIN TEXT, unencrypted. The show running-config (or show run) command will display the config including the password:
Router(config)#do show running-config
Building configuration...
Current configuration : 715 bytes
!
version 15.1
no service timestamps log datetime msec
no service timestamps debug datetime msec
no service password-encryption
!
hostname Router
!
!
!
enable password cisco
!
!
!
!
!
!
ip cef
no ipv6 cef
IOS does feature a “service password-encryption” command that will encode the passwords stored in running-config so they are no longer human readable. After running service password-encryption:
Router(config)#do show run
Building configuration...
Current configuration : 721 bytes
!
version 15.1
no service timestamps log datetime msec
no service timestamps debug datetime msec
service password-encryption
!
hostname Router
!
!
!
enable password 7 0822455D0A16
!
!
!
!
!
!
ip cef
no ipv6 cef
The “7” showing in the “enable password” line denotes Type 7 and means the password has been encoded using the service password-encryption command.
It is best practice to use the enable secret command instead, so the passwords in running-config are not easily cracked.
enable secret hashes the password using MD5 and shows up as type 5.
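For example (the password is a placeholder):

```
! store the enable password as an MD5 hash ("enable secret 5 ..." in the config)
enable secret mypasswordpleasedonthack
! remove any old plain-text / Type 7 enable password
no enable password
```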
Simple Network Management Protocol is a way for devices to send data to each other about network management. It can be used to request statistics or to notify state changes, eg HSRP state change.
SNMP v3 supports encryption and strong authentication (usernames and passwords) so use this if you can.
SNMP v1 and v2c use Community Strings to protect the data these devices send. These community strings act a lot like passwords and are sent in PLAIN TEXT, so they are insecure.
Often by default a Read Only (RO) community string will be set to “public” and a Read Write (RW) community string will be set to “private”. These should be changed from their defaults.
If you are not actually using SNMP, best practice is to disable it altogether.
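A minimal sketch of both recommendations (the community strings are placeholders):

```
! set non-default community strings for SNMP v2c
snmp-server community S3cur3RO ro
snmp-server community S3cur3RW rw
! or, if SNMP is not needed, disable the agent altogether
no snmp-server
```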
Traditionally network switches (Layer 2 switches) link hosts on the same subnet together, while a network router routes traffic between different subnets and links to WANs. L2 switches work by keeping records of known MAC addresses in tables. When a switch receives a frame, it checks the frame header to see if the destination MAC address is known and forwards it out of the destination interface. If it does not know the MAC address, it floods the frame out of all other ports. It has no awareness of IP addresses.
Routers that receive a packet will check the destination IP address in the packet's headers, then make a decision of where to route it to. Decisions the router may take could involve a static route, route decided by an interior gateway protocol, Access Control List etc.
A newer type of device called Layer 3 Switches (or Multilayer Switches) has become more common.
A Layer 3 switch will perform the functions of a regular Layer 2 switch with the addition of some Layer 3 routing functions that you will find in a router.
It would not be accurate to say a Layer 3 switch is just another type of router as there are some notable differences. It may be true to say a L3 switch is a L2 switch with some enhancements.
You could feasibly take a L3 switch and simply use it exactly like a L2 switch and never use its additional L3 features.
Routers will sometimes have expandable slots to upgrade its capabilities. Additional WAN interface cards (WICs) can be installed so a router can support serial, DSL, ISDN, fibre etc.
Switches have a fixed number of ethernet ports and are unlikely to have any expandability.
Routers primarily make their forwarding decisions in software. Switches have specialised ASICs (application-specific integrated circuits) which make them very quick at making forwarding decisions.
Layer 3 switches are more likely to be found in the campus distribution layer. Layer 2 switches are more likely to be found in the campus access layer.
Devices equipped with ethernet sockets will have one of 2 different configurations with differing pinouts.
There are 2 types of cable used to link these devices together. A Straight-through cable and a Crossover cable.
Basic straight-through cable concept:
On straight-through cables for Ethernet and Fast Ethernet, pins 1, 2, 3 and 6 run "straight through" to the same pins at the other end.
On crossover cables, the receive and transmit pins are "flipped", so pin 1 lines up with pin 3 and pin 2 lines up with pin 6.
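The two pinouts can be written out as simple pin-to-pin mappings (a sketch covering only the four pins Ethernet/Fast Ethernet actually use, per the description above):

```python
# Maps "pin at end A" -> "pin at end B" for pins 1, 2 (transmit)
# and 3, 6 (receive).
STRAIGHT_THROUGH = {1: 1, 2: 2, 3: 3, 6: 6}
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}  # TX pair swapped with RX pair

# A crossover cable lines pin 1 up with pin 3 and pin 2 with pin 6:
assert CROSSOVER[1] == 3 and CROSSOVER[2] == 6
print(CROSSOVER)  # {1: 3, 2: 6, 3: 1, 6: 2}
```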
| Connection | Cable type |
| --- | --- |
| Host to switch or hub | Straight-through |
| Router to switch or hub | Straight-through |
| Switch to switch | Crossover |
| Hub to hub | Crossover |
| Hub to switch | Crossover |
| Host to host | Crossover |
| Router direct to host | Crossover |
| Router to router | Crossover |
| IEEE | Standard | Common Name | Medium | Speed | Max Length |
| --- | --- | --- | --- | --- | --- |
| 802.3 | 10BASE-T | Ethernet | 2 pairs copper | 10 Mbps | 100 m |
| 802.3u | 100BASE-T | Fast Ethernet or FE | 2 pairs copper | 100 Mbps | 100 m |
| 802.3ab | 1000BASE-T | Gigabit Ethernet or GE | 4 pairs copper | 1000 Mbps (1 Gbps) | 100 m |
| IEEE | Standard | Common Name | Medium | Speed | Max Length |
| --- | --- | --- | --- | --- | --- |
| 802.3z | 1000BASE-LX | Gigabit Ethernet | ?? | 1000 Mbps (1 Gbps) | 5000 m |
| 802.3? | 10GBASE-S | 10 Gigabit? | Multimode fibre with LED | 10,000 Mbps (10 Gbps) | 400 m |
| | 10GBASE-LX4 | 10 Gigabit? | Multimode fibre with LED | 10 Gbps | 300 m |
| | 10GBASE-LR | 10 Gigabit? | Single-mode fibre with laser | 10 Gbps | 10 km |
| | 10GBASE-E | 10 Gigabit | Single-mode fibre with laser | 10 Gbps | 30 km |
Cabling Guide for Console and AUX Ports
Can a long, tightly coiled LAN cable have trouble transmitting a signal?
New CCNA: the new exam goes live on February 24, 2020. Exam code: 200-301 CCNA
200-301 CCNA Exam topics
200-301 CCNA Exam topics in PDF
Old CCNA exams for CCENT, ICND1 (100-105), ICND2 (200-105)
ICND - Interconnecting Cisco Network Devices
CCNA Routing and Switching (200-125)
old 200-125 CCNA Routing and Switching practice questions
Ten Reasons You Should Get Cisco CCNA Routing and Switching Certified
“I got CCNA certified almost 15 years ago. And even though I am not a networking guy, I am still benefiting from what I have learned from the “old” CCNA R&S. Having a good understanding of networking, routing protocols, vLANs, WAN technologies, etc will really help you in any IT career path.” https://lazyadmin.nl/it/ccna-200-301/
5 Best Network Simulators for Cisco Exams: CCNA, CCNP, CCIE
What is a Full Stack Network Engineer? https://help.nexgent.com/en/articles/587234-what-is-a-full-stack-network-engineer
Paul Browning HowToNetwork.com
Cisco CCNA Certification: Exam 200-301 2 Volume Set Paperback – 19 Apr 2020 by Todd Lammle (Author)
Companion site http://www.ciscopress.com/title/9780135792735
From Routing TCP/IP, Volume II: CCIE Professional Development, 2nd Edition, author Jeff Doyle covers the basic operation of BGP.
Fake Cisco switch (WS-2960X-48TS-L) analysis by F-Secure