
Web and performance tuning

Performance tuning for Nginx

1 Set the number of Nginx worker processes according to the number of CPU cores. A common rule of thumb is one or two workers per core; the exact number depends on the specific hardware configuration

2 Raise the file descriptor limit to 65536 with ulimit -n. That only lasts for the current session; for a permanent change, set fs.file-max in /etc/sysctl.conf and the per-user limits in /etc/security/limits.conf

3 Use epoll as the event model (the Linux kernel's scalable I/O event notification mechanism)

4 Set the number of connections. In theory the maximum is the number of worker processes * the number of connections per worker, but Nginx's theoretical concurrency ceiling is about 65536, so size the two values together... you get the idea

5 Rate limiting: define a time window and a request threshold, so that a client reaching that much concurrency within that much time gets throttled, otherwise... you get the idea

6 Building on rule (5), queue (burst) the requests that exceed the concurrency limit rather than dropping them outright

7 Review the limits from rule (5) regularly and adjust them to actual traffic

8 Configure the buffers (page-cache-backed buffers for client requests and responses)

9 Set the keepalive time to live: how long a connection from a client that still has the page open but sends nothing is kept before Nginx disconnects it automatically
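The tuning items above map onto a handful of nginx.conf directives. A minimal sketch, with illustrative values only (the zone name, rates, and buffer sizes are assumptions to adapt to your hardware and traffic):

```nginx
worker_processes auto;            # item 1: one worker per CPU core
worker_rlimit_nofile 65536;       # item 2: pair with ulimit -n 65536

events {
    use epoll;                    # item 3: epoll event model on Linux
    worker_connections 10240;     # item 4: max clients ~ workers * connections
}

http {
    # items 5-7: allow 10 req/s per client IP, queue bursts of up to 20
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    client_body_buffer_size 16k;  # item 8: request body buffer
    keepalive_timeout 65;         # item 9: close idle clients after 65s

    server {
        listen 80;
        location / {
            limit_req zone=perip burst=20 nodelay;
        }
    }
}
```

Verify the result with nginx -t before reloading; the rate-limit counters from items 5-7 can then be watched via the access log's 503 responses.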

What else can Nginx do?

Nginx can do web port forwarding and act as a reverse proxy

What other software can act as a reverse proxy?

Besides Nginx, Squid counts as one (it can do both forward and reverse proxying, though its reverse proxy function is not as powerful as Nginx's), and Varnish (built specifically for reverse proxying) counts as another

What is a reverse proxy, and what is a forward proxy?

You can think of it this way: the client reaches the server's content over the Internet, but the server itself sits on an internal network with no direct connection to the outside. To reach its content, traffic must pass through proxy software. For that to work, the proxy server has to be configured to know which back-end servers it fronts, and the back-end servers in turn need to know which proxy stands in front of them; both sides are wired together in the configuration.

Forward proxy: you cannot get online yourself, so the proxy goes online on your behalf and brings the data back to you

The reverse proxy is similar, but in the other direction: if it already has the data (cached), it returns it to the client directly; if not, it fetches the data from the back-end server and then hands it to the client
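In Nginx terms, that flow is just a proxy_pass plus an optional cache. A minimal reverse-proxy sketch (the backend address and cache path are assumptions):

```nginx
http {
    # cache zone: repeated requests are answered without hitting the backend
    proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

    server {
        listen 80;
        location / {
            proxy_cache mycache;                  # serve from cache when possible
            proxy_pass http://192.168.1.10:8080;  # otherwise fetch from the backend
            proxy_set_header Host $host;          # preserve the original host
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```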

What is a CDN? A CDN is a content delivery network. Why can a CDN both speed up access and absorb part of a DDoS attack? Because it works like a reverse proxy. A reverse proxy plays two roles: one is to act as a stand-in for the server, delivering the server's content to others on its behalf; the other is to provide security protection. How does it protect? Precisely because it is the server's stand-in: if you attack "the server", what you are really attacking is the proxy, so the real server cannot be hurt. That is how it provides security protection.

The role of a reverse proxy is to protect the security of a website

A reverse proxy runs in one of two modes: it can act as a stand-in for a single content server, or as a load balancer in front of a cluster of content servers
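The second mode, balancing a cluster, looks like this in Nginx (the server addresses and weights are made-up examples):

```nginx
http {
    upstream backend_cluster {
        # default algorithm is round-robin; weight skews the distribution
        server 192.168.1.11:8080 weight=2;
        server 192.168.1.12:8080;
        server 192.168.1.13:8080 backup;  # used only when the others fail
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend_cluster;
        }
    }
}
```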

What advantages does Nginx have over other software for port forwarding?

Nginx supports the fullest set of regular expressions; HAProxy supports them too, albeit slightly more weakly than Nginx; and then there is LVS

Similarities and differences between individual software:

In fact, all of these port-forwarding tools can also be called LB (load-balancing) software, because their purpose is to eliminate the single point of failure.

Nginx and HAProxy work at layer 7 (HAProxy can also work at layer 4), while LVS works only at layer 4; LVS itself does nothing but forward packets.

HAProxy makes up for Nginx's missing session and cookie persistence, and it supports any application software that speaks TCP, whereas Nginx supports only HTTP and mail. So URL filtering is better done with Nginx, but for fuller TCP application support use HAProxy: in pure port forwarding HAProxy is no weaker than Nginx, and arguably better, though Nginx remains a remarkably capable piece of software. Up to now LVS has barely appeared, so let's talk about what LVS brings us. Both LVS and fastDFS were developed by our own countrymen, and many enterprises already use them: the author of the former is Zhang Wensong (Dr.), of the latter Yu Qing (CA).

LVS has very strong load capacity, because its working modes do nothing but distribute requests. But if one of your nodes fails, it will still send requests to the failed node according to its scheduling algorithm: although it is the strongest load balancer, it has no health checks, and it supports neither regular expressions nor dynamic/static separation. That is what keepalived is for: keepalived is software for building high availability, and it comes with a health-check mechanism. The pros and cons of Nginx, HAProxy, and LVS are listed below

Nginx works at layer 7 but supports only HTTP and mail; its regular-expression support is the strongest; it has no cookie or session persistence; its configuration is simple (low overhead)

HAProxy makes up for Nginx's shortcomings: it also works at layer 7, supports essentially all TCP applications, supports session persistence and cookies, carries heavy load, and supports dynamic/static separation

LVS supports neither dynamic/static separation nor regular expressions, but its load capacity is the strongest and its configuration is minimal. It works at layer 4, and since response traffic does not flow back through it (in DR and TUN modes), its impact on I/O is almost zero.

LVS has several modes of operation

LVS-DR (direct routing): what does that mean? When a client sends a request, the data does not go straight to the realserver; it reaches the LVS director first. Through its DR working mode and scheduling algorithm, the LVS picks a realserver, and that realserver responds to the client directly, without passing the reply back through the LVS. This is where a layer-4 load balancer differs from forward and reverse proxies.

LVS-NAT: network address translation. It is somewhat similar to DR above, but the working mode differs. When a client sends a request, the director picks a realserver according to its scheduling algorithm and rewrites the packet's destination IP and port to those of the realserver. For this to work, something must be set up between the realservers and the LVS first: each realserver must point its gateway route at the LVS director's IP, because it is the LVS that will carry the response back. Once the data returns to the LVS's hands, the LVS does not hand it straight to the client either: it rewrites the reply's source IP and port back to the VIP (virtual IP) that the client originally requested, undoing the translation it performed on the way in. Traffic is translated in both directions, so explained this way, the NAT mode really does resemble a combined forward and reverse proxy.

LVS-TUN: the tunnel mode. Unlike NAT, it does not rewrite the client's IP and port. How, then? The client's packet is wrapped inside an IP tunnel packet, which amounts to a second layer of encapsulation; the director then uses its working mode and scheduling algorithm to pick a realserver and forwards the encapsulated packet to it, and that realserver responds to the client directly

Its detailed workflow goes like this: the client sends a request; the LVS-TUN director encapsulates the client's packet a second time into an IP tunnel packet; the scheduling algorithm sends it to a realserver node; the realserver unpacks it and responds directly to the client. That's it
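A DR-mode setup is usually configured on the director with ipvsadm. A sketch, assuming a VIP of 192.168.1.100 and two realservers (all addresses are made up, and the commands need root plus the ip_vs kernel module):

```shell
# on the director: create a virtual TCP service on the VIP, round-robin scheduling
ipvsadm -A -t 192.168.1.100:80 -s rr

# add the realservers in DR mode (-g = gatewaying / direct routing)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -g

# note: each realserver must also hold the VIP on a loopback alias and
# suppress ARP replies for it, so that it can answer clients directly
```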

Performance tuning for MySQL

1 Disable NUMA (or interleave memory allocation across NUMA nodes)

2 Set the disk I/O scheduler to deadline (this is the kernel's I/O scheduler, not the CPU scheduler)

3 Modify the startup script so the settings above are applied on every start

4 Raise the file descriptor limit
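The four items above are applied at the OS level rather than inside MySQL. A sketch of the corresponding commands (the disk device and mysqld path are assumptions, and most of this requires root):

```shell
# 2. switch the data disk's I/O scheduler to deadline
echo deadline > /sys/block/sda/queue/scheduler

# 4. raise the file descriptor limit before the server starts
ulimit -n 65536

# 1. start mysqld with memory interleaved across NUMA nodes;
#    items 1 and 4 are what you would bake into the startup script (item 3)
numactl --interleave=all /usr/sbin/mysqld &
```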

What is fastDFS?

The fastDFS architecture has at least three components: client, tracker, and storage. Text can only express so much of the technology; the point is to understand it. Pursuing perfection is not wrong, but learn it where you will actually use it

To fully understand fastDFS, you first have to know that it is designed on the C/S (client/server) model.

It is somewhat similar to an FTP service, since FTP is also based on the C/S architecture
