
【Network Security】20 cyber security related knowledge points, come and learn!

author:Linux O&M base



1. What is a SQL injection attack?

overview

An attacker injects malicious SQL code into an HTTP request; when the server builds a database SQL command by concatenating the request parameters, the malicious SQL is assembled into the command and executed by the database.

Injection method

Suppose a user logs in with the username lianggzone and the password ' or '1'='1. If the SQL is built by string concatenation, the result is

select * from user where name = 'lianggzone' and password = '' or '1'='1'

Whatever the actual username and password are, the user list returned is never empty. How to protect against SQL injection: using precompiled PreparedStatements is essential, but in practice defenses are applied on both the web side and the server side.

Web-based prevention

  • Validate the input.
  • Limit the length of string inputs.

Server-side prevention

  • Do not build SQL by concatenating strings.
  • Use precompiled PreparedStatements.
  • Validate the input again. (Why must the server also validate? Because external input is untrustworthy: an attacker can bypass the web front end and send requests directly.)
  • Filter special characters in the parameters SQL uses, for example single and double quotation marks.
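The contrast between concatenation and a precompiled statement can be sketched with Python's sqlite3 module as a stand-in for Java's PreparedStatement (the user table and its contents are made up for the demo):

```python
import sqlite3

# Hypothetical user table mirroring the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (name TEXT, password TEXT)")
conn.execute("INSERT INTO user VALUES ('lianggzone', 'secret')")

name, password = "lianggzone", "' or '1'='1"

# Vulnerable: concatenation lets the payload rewrite the WHERE clause,
# yielding ... password = '' or '1'='1', which is true for every row.
unsafe_sql = ("select * from user where name = '%s' and password = '%s'"
              % (name, password))
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Safe: placeholders send the values separately from the SQL text --
# the same idea as Java's precompiled PreparedStatement.
safe_rows = conn.execute(
    "select * from user where name = ? and password = ?",
    (name, password),
).fetchall()

print(unsafe_rows)  # the row leaks despite the wrong password
print(safe_rows)    # [] -- the literal string matches nothing
```

The values never become part of the SQL text in the safe version, so the quote in the payload has no syntactic effect.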

2. What is an XSS attack?

overview

In a cross-site scripting (XSS) attack, the attacker tampers with a web page to embed a malicious script; when users browse the page, the script runs in their browsers and performs malicious operations under the attacker's control.

How to protect against XSS attacks

  • Both the front end and the server side must limit the length of string inputs.
  • Both the front end and the server side must HTML-escape output, encoding special characters such as "<" and ">".

At the heart of XSS prevention is the filtering of the input data.
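In Python, for instance, the standard library's html.escape performs exactly this kind of escaping (a minimal illustration):

```python
import html

# Untrusted input containing a script tag.
payload = '<script>alert("xss")</script>'

# html.escape encodes &, <, >, " and ' so the browser renders the
# payload as text instead of executing it.
escaped = html.escape(payload)
print(escaped)
```

After escaping, the string contains no raw angle brackets, so it can no longer be interpreted as markup.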

3. CSRF attacks

overview

Cross-site request forgery (CSRF) refers to an attacker performing illegal operations as a legitimate user through a cross-site request. Think of it this way: an attacker steals your identity and sends malicious requests to a third-party website on your behalf. CSRF can use your identity to send emails and text messages, make transfers, and even steal account information.

How to protect against CSRF attacks

  1. Use a security framework, such as Spring Security.
  2. Token mechanism: If there is no token in the request or the content of the token is incorrect, the request is rejected as a CSRF attack.
  3. Captcha: CAPTCHAs are often an effective deterrent against CSRF attacks, but for user-experience reasons they are usually an auxiliary measure rather than the primary defense.
  4. Referer check: the HTTP header field Referer records the address from which the request originated. If the Referer is another website, a CSRF attack is possible and the request can be denied. However, not all servers can obtain the Referer: many users suppress it for privacy reasons, and in some cases browsers do not send it at all, such as when jumping from HTTPS to HTTP.
  • Verify the source address of the request;
  • Add captcha to key operations;
  • Add a token to the request address and verify it.
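The token mechanism above can be sketched in a few lines (function names and token length are illustrative, not a specific framework's API):

```python
import hmac
import secrets
from typing import Optional

def issue_token() -> str:
    # random token stored in the user's session and embedded in the form
    return secrets.token_urlsafe(32)

def is_valid(session_token: str, submitted: Optional[str]) -> bool:
    # reject missing tokens; compare in constant time to avoid timing leaks
    return submitted is not None and hmac.compare_digest(session_token, submitted)

token = issue_token()
print(is_valid(token, token))     # legitimate form submission
print(is_valid(token, "forged"))  # attacker guessing the token
print(is_valid(token, None))      # cross-site request with no token
```

A forged cross-site request cannot read the victim's page, so it cannot supply the correct token.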

4. File upload vulnerability

overview

A file upload vulnerability lets a user upload an executable script file and use it to execute commands on the server.

Many third-party frameworks and services have had file upload vulnerabilities exposed, such as Struts2 and various rich-text editors; if an attacker can upload files, the server may be compromised.

How to protect against file upload attacks

  • Determine the file type. When judging the file type, you can use a combination of MIME type, suffix checking, etc. For an uploaded file, the file type cannot be determined simply by the suffix name, because an attacker can change the suffix name of the executable file to an image or other suffix type to induce users to execute it.
  • Verify the whitelist of uploaded file types, and only reliable types are allowed.
  • Rename uploaded files so that an attacker cannot guess their access path, which greatly raises the cost of an attack; renaming also defeats tricks such as a file named shell.php.rar.ara.
  • Limit the size of the uploaded file.
  • Set the domain name of the file server separately.
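The whitelist, size-limit, and rename advice can be sketched as follows (the allowed extensions and the size cap are assumed values, and suffix checking alone is not sufficient, as the list notes):

```python
import os
import uuid
from typing import Optional

ALLOWED = {".jpg", ".png", ".gif"}   # assumed whitelist of reliable types
MAX_BYTES = 2 * 1024 * 1024          # assumed size limit

def accept_upload(filename: str, size: int) -> Optional[str]:
    """Return the randomized stored name, or None to reject the upload."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED or size > MAX_BYTES:
        return None
    # random rename: the attacker can no longer guess the stored path
    return uuid.uuid4().hex + ext

print(accept_upload("avatar.PNG", 1024))  # random name ending in .png
print(accept_upload("shell.php", 1024))   # None -- not on the whitelist
print(accept_upload("big.jpg", 10**8))    # None -- exceeds the size limit
```

In a real service this check would be combined with MIME-type inspection and storage on a separate file-server domain.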

5. DDoS attacks

overview

A typical example is the SYN flood: the client sends TCP connection request (SYN) packets to the server, the server replies with acknowledgment (SYN+ACK) packets, but the client never sends the final acknowledgment, so the server keeps waiting for it and its half-open connection queue fills up.


DDoS prevention:

  • Limit the number of half-open SYN connections that can be open at the same time
  • Reduce the timeout for half-open SYN connections
  • Turn off unnecessary services
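On Linux, the first two measures commonly map to kernel parameters along these lines (the parameter names are real sysctl keys; the values are illustrative and should be tuned per host):

```shell
# /etc/sysctl.conf fragment -- illustrative values
# cap the half-open (SYN) connection queue
net.ipv4.tcp_max_syn_backlog = 2048
# fewer SYN+ACK retries, so half-open connections time out sooner
net.ipv4.tcp_synack_retries = 2
# fall back to SYN cookies when the queue overflows
net.ipv4.tcp_syncookies = 1
```

Apply with `sysctl -p` after editing the file.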

Important protocols

6. ARP Protocol

overview

Address Resolution Protocol (ARP) is a TCP/IP protocol that obtains a physical address based on an IP address.

  1. The Ethernet data frame that sends the ARP request is broadcast to each host on the Ethernet, and the IP address of the destination host is included in the ARP request frame.
  2. After receiving the ARP request, the destination host sends an ARP reply containing its own MAC address.

How the ARP protocol works

  1. Each host establishes an ARP list in its own ARP buffer to represent the correspondence between IP addresses and MAC addresses.
  2. When a host (network interface) joins the network (or its MAC address changes, its interface restarts, etc.), it broadcasts a gratuitous ARP packet announcing the mapping between its IP address and its MAC address to the other hosts.
  3. When a host on the network receives a gratuitous ARP packet, it updates its own ARP cache with the new mapping.
  4. When a host needs to send a packet, it first checks its ARP list for the MAC address corresponding to the destination IP; if present, it sends the data directly. If not, it broadcasts an ARP request to all hosts on the subnet, containing the source host's IP address, the source host's MAC address, and the destination host's IP address.

When the ARP packet is received by all hosts on this network:

  • Each host first checks whether the destination IP address in the packet is its own; if not, it ignores the packet.
  • If it is, the host takes the source host's IP and MAC addresses from the packet and writes them into its ARP list, overwriting any existing entry.
  • It then writes its own MAC address into an ARP response packet, telling the source host that this is the MAC address it is looking for.

After the source host receives the ARP response packet, it writes the destination host's IP and MAC addresses into its ARP list and uses this information to send the data. If the source host never receives an ARP response, the ARP query has failed. The ARP cache (the ARP table) is the key to ARP's efficient operation.
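The receive-side steps above can be sketched as a toy in-memory ARP cache (a simplification: real caches also age entries out, and the packet fields here are just dictionary keys):

```python
# IP -> MAC mappings learned so far
arp_table: dict = {}

def handle_arp_request(packet: dict, my_ip: str, my_mac: str):
    """Process one broadcast ARP request the way the steps above describe."""
    # first check whether the destination IP in the packet is ours
    if packet["dst_ip"] != my_ip:
        return None                      # not for us: ignore the packet
    # record (or overwrite) the source host's IP -> MAC mapping
    arp_table[packet["src_ip"]] = packet["src_mac"]
    # reply with our own MAC address
    return {"src_ip": my_ip, "src_mac": my_mac,
            "dst_ip": packet["src_ip"], "dst_mac": packet["src_mac"]}

req = {"src_ip": "10.0.0.2", "src_mac": "aa:aa", "dst_ip": "10.0.0.1"}
reply = handle_arp_request(req, my_ip="10.0.0.1", my_mac="bb:bb")
print(arp_table)         # the requester's mapping has been learned
print(reply["src_mac"])  # the MAC the source host asked for
```

The lack of any authentication in this exchange is exactly what ARP spoofing exploits: any host can send a reply claiming an IP-to-MAC mapping.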

How to Prevent ARP Attacks?

1. MAC address binding

The IP address of each computer in the network corresponds to the hardware address one-to-one and cannot be changed.

2. Use static ARP cache

Manually update the records in the cache so that ARP spoofing is impossible.

3. Use an ARP server

The server looks up its own ARP translation table in response to ARP broadcasts from other machines. Make sure that this ARP server is not hacked.

4. Use ARP spoofing protection software:

Such as ARP firewalls

5. Detect and isolate hosts that are being spoofed by ARP in time

6. Use the latest version of DNS server software and install patches promptly (this and the following measures target DNS spoofing, which often accompanies ARP attacks)

7. Disable the recursion function of the DNS server

When a DNS server answers a query from its cache, or obtains the answer by querying other servers and returns it to the client, this is a recursive query; recursion can easily be abused for DNS spoofing.

8. Limit the scope of zone transfers

Restrict which addresses the name servers will respond to, and which addresses may request zone transfers.

9. Restrict dynamic updates

10. Adopt a hierarchical DNS architecture

11. Check the source code

If a URL redirect occurs, you're bound to find out. However, checking the source code of every page a user connects to is an impractical idea for the average user.

12. Ensure that the app is effective and can track users appropriately

Whether you're using cookies or session IDs, make sure they are as long and random as possible.

7. How RARP works

Reverse Address Resolution Protocol (RARP) is a network-layer protocol that works in the opposite direction of ARP: it lets a host that knows only its hardware address learn its IP address. The host broadcasts its physical address and asks for the corresponding IP address; a RARP server that holds the required information sends back the reply.

Principle:

  1. Every device on the network has a unique hardware address, usually the MAC address assigned by the manufacturer. The host reads the MAC address from its NIC and broadcasts a RARP request packet, asking a RARP server to reply with the host's IP address.
  2. The RARP server receives the RARP request packet, assigns an IP address, and sends a RARP response to the host.
  3. The host receives the RARP response and communicates using the resulting IP address.

8. How DNS works

Translates the host domain name to an IP address, which is an application-layer protocol and uses UDP for transmission.

Process:

Summary of the lookup order: browser cache → system cache → router cache → ISP DNS server cache → root name server → top-level domain name server → authoritative name server.

There are two ways to query

  1. Queries from hosts to local nameservers are generally recursive.
  2. An iterative query of a query from the local nameserver to the root nameserver.
  • When the user enters a domain name, the browser first checks whether its own cache maps the domain to an IP address; if so, resolution is complete.
  • On a miss, it checks whether the operating system cache (such as the Windows hosts file) holds a result; if so, resolution is complete.
  • On a miss, it asks the local DNS server (LDNS) to resolve the name.
  • If LDNS misses, it queries a root name server, which returns the address of a top-level domain name server to LDNS.
  • LDNS then sends a request to the gTLD (generic top-level domain) server returned in the previous step, which looks up and returns the address of the Name Server authoritative for the domain.
  • That Name Server finds the target IP in its mapping table and returns it to LDNS.
  • LDNS caches the domain name and its IP and returns the result to the user, who caches it in the local system cache according to the TTL value. The domain name resolution process ends here.
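The fall-through behavior of this cache chain can be modeled with a few dictionaries (the layer contents and the example.com address are illustrative):

```python
# Each layer is (name, cache); contents are made up for the sketch.
layers = [
    ("browser cache", {}),
    ("system cache", {}),
    ("router cache", {}),
    ("LDNS cache", {"example.com": "93.184.216.34"}),
]

def resolve(domain: str):
    """Consult each cache layer in order; fall through on a miss."""
    for layer_name, cache in layers:
        if domain in cache:
            return layer_name, cache[domain]   # hit: resolution complete
    # every cache missed: LDNS must walk root -> gTLD -> authoritative
    return "full recursive/iterative lookup", None

print(resolve("example.com"))   # answered from the LDNS cache
print(resolve("unknown.test"))  # falls through to a full lookup
```

Each cache hit short-circuits the chain, which is why popular domains rarely reach the root servers.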

DNS attacks


DNS attack prevention

  1. Security updates: Keep up to date with all relevant software and operating system patches, and ensure that all devices have the latest version of security software.
  2. Configure the correct permissions: Give only necessary users the appropriate permissions and prohibit the use of weak passwords to log in.
  3. Encrypted transmission: Encryption technology (e.g., SSL/TLS) is used to protect sensitive data from interference or leakage during transmission.
  4. Multi-level authentication: Enhance account security through multi-factor authentication before logging into the system
  5. Fault-tolerant mechanism: Establish cold and hot standby for core services, and quickly switch to the backup system in the event of a failure, reducing the risk of service interruption.

9. How RIP works

RIP Dynamic Routing Protocol (Network Layer Protocol)

RIP is a protocol based on a Distance-Vector algorithm that uses hop count as a metric to measure the distance of a route to the destination network. RIP exchanges routing information through UDP packets and uses port number 520.

How it works:

The RIP routing protocol uses two packet types, "update" and "request," to transmit information. Every 30 seconds, each RIP-enabled router broadcasts an update from UDP port 520 to the routers directly connected to it, using hop count as the measure of network distance. Each router adds one internal distance to every route when it sends routing information to its neighbors.

Convergence mechanism of the router:

The problem with any distance vector routing protocol (such as RIP) is that the router is not aware of the global picture of the network, and the router must rely on neighboring routers to get the reachable information of the network. Due to the slow propagation of routing update information on the network, the distance vector routing algorithm has a slow convergence problem, which will lead to inconsistencies.

Mechanisms RIP uses to mitigate slow convergence:

  1. Count-to-infinity mechanism: The RIP protocol allows a maximum hop count of 15. Destinations greater than 15 are considered unreachable. When the number of hops of a path exceeds 15, the path is deleted from the routing table.
  2. Split horizon: a router does not advertise a route back in the direction it came from. When a route arrives on an interface, the router records that interface and never sends the route back out through it.
  3. Split horizon with poison reverse: during updates, a route learned from a router is advertised back to that router as unreachable.
  4. Hold-down timer: after a route is deleted from the routing table, the router refuses new routing information for it for a set period (typically 180 seconds), ensuring every router has received the route-unreachable message.
  5. Trigger update method: When the hop count of a path changes, the router immediately sends an update message, regardless of whether the router reaches the regular information update time.
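A single distance-vector update step, including the 15-hop limit (a metric of 16 meaning unreachable), can be sketched as:

```python
INFINITY = 16  # a metric of 16 means unreachable in RIP

def rip_update(table: dict, neighbor: str, advertised: dict) -> dict:
    """Merge one neighbor's advertisement into our routing table.

    table maps destination -> (hops, next_hop); advertised maps
    destination -> hops as seen by the neighbor.
    """
    for dest, hops in advertised.items():
        new_hops = min(hops + 1, INFINITY)   # one extra hop via the neighbor
        current = table.get(dest)
        # adopt a new destination, a strictly better route, or any update
        # from the route's current next hop (even if the route got worse)
        if current is None or new_hops < current[0] or current[1] == neighbor:
            table[dest] = (new_hops, neighbor)
    return table

table = {"netA": (2, "R1")}
rip_update(table, "R2", {"netA": 1, "netB": 3})
print(table)  # netA keeps its equal-cost route via R1; netB is learned via R2
```

This local-only view of the network is exactly why distance-vector protocols converge slowly: each router trusts whatever its neighbors advertise.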

Features of RIP:

  1. Since 15 hops is the maximum, RIP can only be applied to small-scale networks;
  2. slow convergence rate;
  3. A route selected based on the number of hops is not necessarily the optimal route.

OSPF protocol: how does OSPF work?

OSPF (Open Shortest Path First) is the most widely used interior gateway protocol; it is a link-state protocol at the network layer.

Principle:

OSPF multicasts Hello packets out of all OSPF-enabled interfaces to discover OSPF neighbors; if any are found, it establishes neighbor relationships to form a neighbor table. Neighbors then exchange LSAs (Link-State Advertisements) to advertise routes and build the LSDB (Link-State Database). Finally, the SPF algorithm computes the optimal (lowest-cost) path to each destination and installs it in the routing table.

10. Differences between TCP and UDP

  1. TCP provides reliable services for connections (e.g., dial-up to establish a connection before making a call); UDP is connectionless, i.e. no connection is required before sending data; UDP delivers on a best-effort basis, i.e. reliable delivery is not guaranteed. (Since UDP does not need to establish a connection, UDP does not introduce a delay in establishing a connection, TCP needs to maintain the connection state in the end system, such as accept and send cache, congestion control, sequence number and acknowledgment number parameters, etc., so TCP will be slower than UDP)
  2. UDP has better real-time performance and higher efficiency than TCP, and suits communication or broadcast scenarios that demand high-speed, real-time transmission.
  3. Each TCP connection can only be one-to-one; UDP supports one-to-one, one-to-many, many-to-one, and many-to-many interactive communication
  4. The UDP header has a small overhead of only 8 bytes, while the TCP header overhead is 20 bytes.
  5. TCP is oriented towards byte streams, which is actually TCP treats data as a series of unstructured byte streams; UDP is message-oriented (one complete message is delivered at a time, the message is indivisible, and the message is the smallest unit of UDP datagram processing).
  6. UDP is suitable for network applications such as DNS, SNMP, etc., that transmit small amounts of data at once

What are the three-way handshake and the four-way wave? Why does TCP need a three-way handshake?

To prevent an invalid (stale) connection request segment from suddenly arriving at the server and being mistaken for a new connection request.

First handshake: When the connection is established, the client sends a syn packet (syn=j) to the server, and enters the SYN_SEND state, waiting for the server to acknowledge.

The second handshake: when the server receives the SYN packet, it must acknowledge the client's SYN (ack=j+1) and send its own SYN packet (syn=k), i.e. a SYN+ACK packet; the server then enters the SYN_RECV state.

Third handshake: The client receives the SYN+ACK packet from the server and sends the acknowledgment packet ACK (ack=k+1) to the server.

The three-way handshake is completed, and the client and server begin to transmit data
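The sequence/acknowledgment arithmetic of the three steps can be written out directly (j and k are arbitrary initial sequence numbers):

```python
j, k = 1000, 5000  # client and server initial sequence numbers

syn     = {"flags": "SYN",     "seq": j}                                  # 1st handshake
syn_ack = {"flags": "SYN+ACK", "seq": k, "ack": syn["seq"] + 1}           # 2nd handshake
ack     = {"flags": "ACK",     "seq": j + 1, "ack": syn_ack["seq"] + 1}   # 3rd handshake

print(syn_ack["ack"])  # acknowledges the client's syn=j with j+1
print(ack["ack"])      # acknowledges the server's syn=k with k+1
```

Each side acknowledges the other's sequence number plus one, which is why both the SYN and the FIN each consume one sequence number.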


Four waves of the hand

The client first sends a FIN and enters the FIN_WAIT1 state; this closes the data transfer from client to server. The server receives the FIN, sends an ACK, and enters the CLOSE_WAIT state. The client receives the ACK and enters the FIN_WAIT2 state.

The server then sends its own FIN and enters the LAST_ACK state; this closes the data transfer from server to client. The client receives the FIN, sends an ACK, and enters the TIME_WAIT state (waiting 2MSL, about 4 minutes, mainly so that the last ACK is not lost). The server receives the ACK and enters the CLOSED state.

The first wave: the active closing party sends a FIN to close the data transmission from the active party to the passive closing party, that is, the active closing party tells the passive closing party that I will not send you data anymore (of course, the data sent before the FIN packet, if the corresponding ACK acknowledgment message is not received, the active closing party will still resend the data), but at this time, the active closing party can still accept the data.

Second wave: After receiving the FIN packet, the passive closing party sends an ACK to the other party, confirming that the sequence number is received +1 (the same as SYN, one FIN occupies one sequence number).

The third wave: The passive closing party sends a FIN to turn off the data transmission from the passive closing party to the active closing party, that is, telling the active closing party that my data has also been sent and will not send you any more data.

Fourth wave: After receiving the FIN, the active closing party sends an ACK to the passive closing party to confirm that the sequence number is received +1, so far, the four waves are completed.


11. The difference between GET and POST

GET retrieves data; POST modifies data.

GET places the request data in the URL, separated from it by a ?, with parameters joined by &, so GET is not very secure; POST places the data in the HTTP request body. The maximum amount of data GET can submit is about 2 KB (the actual limit depends on the browser), while POST in theory has no limit.

GET generates a TCP packet, the browser will send the http header and data together, and the server responds with 200 (return data); POST generates two TCP packets, the browser sends a header first, the server responds with 100 continue, and the browser sends data again, and the server responds with 200 ok (return data).

GET requests are actively cached by the browser, while POST is not, unless set manually.

GET is idempotent, whereas POST is not idempotent
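The "data in the URL, parameters joined with &" behavior of GET can be seen with Python's urllib (a small illustration; the URL is made up):

```python
from urllib.parse import parse_qs, urlencode

params = {"user": "lianggzone", "page": "2"}
query = urlencode(params)                 # parameters joined with &
url = "https://example.com/list?" + query # ? separates URL and data

print(url)              # the whole request state is visible in the URL
print(parse_qs(query))  # the server decodes it back into parameters
```

Because the parameters sit in the URL, they end up in browser history, logs, and Referer headers, which is the sense in which GET "is not very secure" for sensitive data.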

The difference between cookies and sessions

Both cookies and sessions are solutions for maintaining state between the client and the server

  1. The storage location differs. A cookie is stored on the client; a session is stored on the server, so data kept in a session is relatively more secure.
  2. The types of stored data differ. Both are key-value structures, but a cookie value can only be a string, while a session value can be an Object.
  3. The size limits differ. A cookie's size is limited by the browser, commonly to 4 KB; a session is in theory limited only by available memory.
  4. The lifecycle control differs:
  • A cookie's lifetime is cumulative: it is counted from the moment of creation, so a 20-minute cookie expires 20 minutes after creation (and a session cookie dies when the browser is closed).
  • A session's lifetime is an idle interval: if 20 minutes pass without the session being accessed, the session is destroyed.

12. How the session works

After the client logs in, the server creates a corresponding session and sends the session ID to the client, which stores it in the browser. On every subsequent request the client carries this session ID; the server uses it to find the corresponding session in memory and can therefore identify the client and work normally.
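A minimal in-memory session store following this flow (names are illustrative; real servers also expire sessions and send the ID in a cookie):

```python
import secrets

sessions: dict = {}   # server-side store: session ID -> session data

def login(username: str) -> str:
    sid = secrets.token_hex(16)          # session ID returned to the client
    sessions[sid] = {"user": username}   # the session itself stays on the server
    return sid

def handle_request(sid: str):
    # the client carries its session ID on every request
    return sessions.get(sid)             # None: unknown or expired session

sid = login("alice")
print(handle_request(sid))      # the server recognizes the logged-in user
print(handle_request("bogus"))  # an invalid ID maps to no session
```

Only the opaque random ID crosses the network, which is why it must be long and unguessable.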

13. A complete HTTP request process

  1. Domain name resolution
  2. Initiates a 3-way handshake for TCP
  3. Initiate an http request after a TCP connection is established
  4. The server responds to the HTTP request, and the browser gets the html code
  5. The browser parses the HTML code and requests resources in the HTML code (such as JS, CSS, images, etc.) for the browser to render the page to the user.

14. The difference between HTTPS and HTTP

  1. The data transmitted by the HTTP protocol is unencrypted, that is, plaintext, so it is very insecure to use the HTTP protocol to transmit private information, and the HTTPS protocol is a network protocol built by the SSL+HTTP protocol that can be encrypted for transmission and identity authentication, which is more secure than the http protocol.
  2. The HTTPS protocol requires a certificate to apply for a certificate from the CA, and there are generally fewer free certificates, so it requires a certain fee.
  3. HTTP and HTTPS use completely different connections and different ports, with the former being 80 and the latter 443. https://www.cnblogs.com/mywe/p/5407468.html

15. The seven-layer model of OSI

  1. Physical layer: The transmission medium is used to provide a physical connection to the data link layer to achieve transparent transmission of bitstreams.
  2. Data link layer: Receives data in the form of a bit stream from the physical layer, encapsulates it into a frame, and transmits it to the upper layer
  3. Network layer: Translates the network address into the corresponding physical address and selects the most appropriate path for the packet through the communication subnet through the routing algorithm.
  4. Transport layer: Provides reliable and transparent data transmission between the source and destination
  5. Session layer: Responsible for establishing, maintaining, and terminating communication between two nodes in the network
  6. Presentation layer: deals with the presentation of user information, data encoding, compression and decompression, and data encryption and decryption
  7. Application layer: provides network communication services for users' application processes

16. The difference between HTTP long connections and short connections

HTTP/1.0 uses short-lived connections by default: every HTTP operation between client and server establishes a new connection, which is torn down when the task completes. From HTTP/1.1 on, persistent connections are the default, keeping the connection alive across requests.

What is TCP packet sticking/unpacking? What causes it, and how is it solved? A complete message may be split by TCP into several packets for sending, or several small packets may be bundled into one large packet for sending — this is the TCP unpacking and sticking problem.

Cause:

  1. The application writes data larger than the socket send buffer.
  2. TCP performs MSS-sized segmentation. (MSS = TCP segment length − TCP header length.)
  3. The Ethernet payload exceeds the MTU, causing IP fragmentation. (The MTU is the maximum packet size that a given layer of a communication protocol can carry.)

Solution:

  1. The message is fixed-length.
  2. Add special characters such as carriage return or spaces at the end of the packet to split it
  3. Divide the message into headers and tails.
  4. Use other complex protocols such as RTMP.
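Solution 3 — splitting each message into a header and a body — is often implemented as a length prefix; a sketch:

```python
import struct

def pack(msg: bytes) -> bytes:
    # 4-byte big-endian length header, then the body
    return struct.pack("!I", len(msg)) + msg

def unpack_stream(stream: bytes) -> list:
    """Split a received byte stream back into whole messages."""
    msgs, i = [], 0
    while i + 4 <= len(stream):
        (n,) = struct.unpack_from("!I", stream, i)
        if i + 4 + n > len(stream):
            break                 # incomplete frame: wait for more bytes
        msgs.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return msgs

# two messages "stuck" together in one received chunk
stream = pack(b"hello") + pack(b"world")
print(unpack_stream(stream))      # both messages recovered intact
print(unpack_stream(stream[:6]))  # a partial frame is held back, not misread
```

The length header tells the receiver exactly where one message ends and the next begins, regardless of how TCP segmented the stream.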

17. How does TCP ensure reliable transmission?

  1. Three handshakes.
  2. Truncate the data to a reasonable length. Application data is split into chunks (byte-numbered, properly sharded) that TCP deems most appropriate to send
  3. Resend timeout. When TCP sends out a segment, it starts a timer and resends it if it doesn't receive an acknowledgment in time
  4. Acknowledgment Response: Acknowledgement response is given for the received request
  5. Checksum: If an error is detected in the packet, the packet segment is discarded and no response is given
  6. Serial number: Out-of-order data is reordered before handing it off to the application layer
  7. Discard duplicate data: For duplicate data, it is possible to discard duplicate data
  8. Flow control. Each side of a TCP connection has a fixed amount of buffer space; the TCP receiver only allows the other end to send as much data as its receive buffer can accept, which prevents a fast host from overflowing a slower host's buffer.
  9. Congestion control. When the network is congested, the sender reduces the amount of data it sends.

In short: checksums, sequence numbers, acknowledgments, timeout retransmission, connection management, flow control, and congestion control together guarantee reliable transmission.
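The checksum item can be illustrated with the 16-bit one's-complement checksum used by TCP/IP headers (a minimal sketch; the sample bytes are an arbitrary header fragment):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum as used in IP/TCP/UDP headers."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:                  # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

data = b"\x45\x00\x00\x28"
csum = internet_checksum(data)
print(hex(csum))
# a receiver re-running the sum over data + checksum gets 0 if intact
print(internet_checksum(data + csum.to_bytes(2, "big")))
```

If any bit of the segment is corrupted in transit, the receiver's verification sum is nonzero and the segment is discarded without acknowledgment, triggering retransmission.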

18. Common status codes

  • 200 OK: the client request succeeded
  • 403 Forbidden: the server received the request but refuses to serve it
  • 404 Not Found: the requested resource does not exist, e.g. a mistyped URL
  • 500 Internal Server Error: an unexpected error occurred on the server

The difference between a URI and a URL: a URI (uniform resource identifier) uniquely identifies a resource; a URL both identifies a resource and indicates how to locate it.

19. What is SSL?

SSL stands for Secure Sockets Layer. It is a protocol used to encrypt and authenticate data sent between an application, such as a browser, and a web server. The encryption mechanism behind HTTPS is a hybrid one that uses both shared-key (symmetric) encryption and public-key encryption.

SSL/TLS protocol role

  1. Authenticating users and servers, encrypting data, and maintaining data integrity rely on asymmetric encryption, which uses two different keys for encryption and decryption — hence the name.
  2. Symmetric encryption uses the same key for both encryption and decryption; its advantage is that it is usually much more efficient. HTTPS key exchange is based on asymmetric encryption, with the public key made public.

A simplified exchange:

  1. The client initiates an SSL connection request to the server.
  2. The server sends its public key to the client; the server alone holds the private key.
  3. The client encrypts the symmetric key the two parties will use with the public key and sends it to the server.
  4. The server decrypts the symmetric key sent by the client with its own unique private key.
  5. For data transfer, both server and client use this shared symmetric key to encrypt and decrypt the data.

This keeps data transmission and reception secure: even if a third party captures the packets, it cannot decrypt or tamper with them.

Digital signatures

Digital signatures and digests are critical weapons for certificate anti-forgery. A "digest" is a fixed-length string computed from the transmitted content by a hash algorithm. Encrypting the digest with the CA's private key yields the "digital signature".
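The digest half of this process can be shown with a standard hash function; the CA's private-key encryption step needs a crypto library and is only noted in a comment (the certificate content here is made up):

```python
import hashlib

# Made-up certificate content for illustration.
content = b"subject=example.com; public-key=...; validity=2025"

# The "digest": a fixed-length fingerprint of the content.
digest = hashlib.sha256(content).hexdigest()

# The CA would now encrypt this digest with its private key to form
# the digital signature (omitted: requires a crypto library).
print(len(digest))  # fixed length, no matter how long the input is
print(digest == hashlib.sha256(content + b"!").hexdigest())  # any change alters it
```

A verifier recomputes the digest from the certificate and compares it with the one recovered from the signature using the CA's public key; any mismatch means tampering.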

The basic idea of the SSL/TLS protocol is to use public key cryptography, that is, the client first asks the server for the public key, and then uses the public key to encrypt the information, and the server receives the ciphertext and decrypts it with its own private key.

How do I ensure that the public key is not tampered with?

Place the public key in the digital certificate. As long as the certificate is trusted, the public key is trusted.

How can the heavy computation time of public-key cryptography be reduced?

For each session, the client and server generate a "session key" used to encrypt the messages. Since the session key uses symmetric encryption, it is very fast, and the server's public key is used only to encrypt the session key itself, which cuts the time spent on encryption.

(1) The client asks the server for its public key and verifies it.

(2) The two parties negotiate to generate a "session key".

(3) The two parties use the "session key" for encrypted communication. The first two steps are also known as the "handshake phase".

SSL working process

A: Client, B: Server

  1. Negotiate the encryption algorithm: A sends the SSL version number and optional encryption algorithm to B, and B selects the algorithm he supports and tells A
  2. Server authentication: B sends a digital certificate containing the public key to A, and A validates the certificate with the public key publicly available to the CA
  3. Session key calculation: A generates a random secret number, encrypts it with B's public key and sends it to B, and B generates a shared symmetric session key according to the negotiated algorithm and sends it to A.
  4. Secure data transfer: Both parties encrypt and decrypt the data transmitted between them with a session key and verify its integrity

20. The application layer protocol corresponding to TCP

FTP: the File Transfer Protocol, which uses port 21.

Telnet: the remote login protocol, which uses port 23.

SMTP: the Simple Mail Transfer Protocol; the server listens on port 25.

POP3: the counterpart of SMTP, used to receive email (port 110).

HTTP: the Hypertext Transfer Protocol, which uses port 80.

The application-layer protocol corresponding to UDP

DNS: the domain name resolution service, which uses port 53.

SNMP: the Simple Network Management Protocol, which uses port 161.

TFTP (Trivial File Transfer Protocol): a simple file transfer protocol, which uses port 69.
