
Never used microservices? Don't panic: this bare-bones architecture diagram will see you through the interview

Author: Nezha Programming

Hello everyone, I'm Nezha.

Many people say that this is the era of cloud native and large models and that microservices are outdated. The reality, however, is that many developers have never used microservices in real projects, let alone built a microservice framework or done the technology selection themselves.

So what do you do when the interviewer asks about them?

Today I'm sharing a bare-bones microservice architecture diagram so you can hold your own with the interviewer.

Without further ado, let's go straight to the picture.

(Architecture diagram: the bare-bones Spring Cloud microservice stack)

As the diagram shows, the Spring Cloud microservice architecture is composed of multiple components, which interact as follows.

  1. The browser queries the DNS server to resolve the domain name into the network location of an available entry point, so that addresses can be discovered automatically and updated dynamically.
  2. Static resources are fetched through a CDN to improve access speed and avoid slow cross-region requests.
  3. The LVS load balancer provides transport-layer load balancing.
  4. The Nginx reverse proxy server forwards requests to the gateway, which performs route forwarding and security checks.
  5. The gateway consults Nacos, the registry and configuration center, to obtain backend services and configuration items.
  6. Sentinel provides rate limiting and circuit breaking.
  7. Redis provides caching, session management, and distributed locks.
  8. Elasticsearch provides full-text search and log storage, and works with Kibana for real-time visual analysis of the data it holds.

1. Domain Name System (DNS).

In microservices, DNS is primarily used for service discovery and load balancing.

  1. When a microservice instance starts, it registers its IP address, port, and other information with the DNS server. The browser then queries the DNS server for the network location of an available instance, which enables automatic service discovery and dynamic updates.
  2. The DNS server can distribute requests across different load balancers according to policies such as round robin or random selection, improving the system's concurrency and fault tolerance.
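The round-robin distribution described above can be sketched as follows. The IP addresses are made up, and the class simply simulates a client rotating through the records a DNS lookup returns:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of DNS-style round-robin: a lookup yields several
// addresses and the client rotates through them, as a DNS server
// returning records in rotated order effectively does.
public class DnsRoundRobin {
    private final List<String> addresses;
    private final AtomicInteger counter = new AtomicInteger();

    public DnsRoundRobin(List<String> addresses) {
        this.addresses = addresses;
    }

    // Pick the next address in rotation.
    public String next() {
        int i = Math.floorMod(counter.getAndIncrement(), addresses.size());
        return addresses.get(i);
    }

    public static void main(String[] args) {
        DnsRoundRobin dns = new DnsRoundRobin(
                List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
        for (int n = 0; n < 4; n++) {
            System.out.println(dns.next()); // cycles 10.0.0.1, .2, .3, .1
        }
    }
}
```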

2. LVS (Linux Virtual Server)

LVS is open-source load-balancing software based on the Linux operating system. The load balancing itself is implemented inside the Linux kernel (the IPVS module), while the balancing strategy is configured through tools running in user space.

  1. LVS supports a variety of scheduling algorithms, such as round robin, weighted round robin, least connections, and weighted least connections.
  2. LVS works at the transport layer and supports TCP and UDP, so it can balance HTTP, HTTPS, and other TCP-based traffic to meet the needs of different applications.
  3. LVS is highly available and scalable. It supports master-backup and redundant configurations: when the primary server fails, the backup server automatically takes over the load, ensuring service continuity. In addition, LVS allows server nodes to be added and removed dynamically, making it easy for administrators to scale out and scale in.
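As an illustration of the weighted round-robin idea, here is a sketch of the "smooth" weighted round-robin policy used by several load balancers. In real LVS the scheduling happens inside the kernel's IPVS module; the server names and weights here are invented:

```java
import java.util.ArrayList;
import java.util.List;

// "Smooth" weighted round-robin: each pick, every server's current score
// grows by its weight; the highest score wins and pays back the total
// weight. Heavier servers win proportionally more often, without bursts.
public class WeightedRoundRobin {
    private final String[] servers;
    private final int[] weights;
    private final int[] current;
    private final int totalWeight;

    public WeightedRoundRobin(String[] servers, int[] weights) {
        this.servers = servers;
        this.weights = weights;
        this.current = new int[servers.length];
        int sum = 0;
        for (int w : weights) sum += w;
        this.totalWeight = sum;
    }

    public String next() {
        int best = 0;
        for (int i = 0; i < servers.length; i++) {
            current[i] += weights[i];
            if (current[i] > current[best]) best = i;
        }
        current[best] -= totalWeight;
        return servers[best];
    }

    public static void main(String[] args) {
        WeightedRoundRobin wrr = new WeightedRoundRobin(
                new String[]{"a", "b"}, new int[]{3, 1});
        List<String> picks = new ArrayList<>();
        for (int n = 0; n < 4; n++) picks.add(wrr.next());
        System.out.println(picks); // "a" picked three times for each "b"
    }
}
```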

3. CDN static resources

Static resources include images, videos, JavaScript files, CSS files, and static HTML files. They are characterized by a large volume of read requests, high demands on access speed, and heavy bandwidth use. Handled poorly, access becomes slow and bandwidth fills up, which in turn affects the processing of dynamic requests.

The role of a CDN is to distribute these static resources to servers in data centers in multiple geographic locations, so that users can fetch them from the nearest node. This improves access speed and solves the problem of slow cross-region requests.

4. Nginx Reverse Proxy Server

1. The main roles of Nginx are the following:

  1. Reverse proxy: Nginx can act as a reverse proxy server, receiving requests from clients and forwarding them to backend microservice instances.
  2. Load balancing: Nginx can distribute requests to different microservice instances according to its configuration.
  3. Service routing: Nginx can route requests to different microservices based on path rules.
  4. Static resource service: Nginx can serve static resources such as images, videos, JavaScript files, CSS files, and static HTML files, reducing the pressure on backend services and improving the system's response speed and performance.
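A minimal nginx.conf sketch combining the reverse-proxy, load-balancing, and static-resource roles above; the upstream name, addresses, and paths are all hypothetical:

```nginx
# Illustrative fragment only; names, ports and paths are made up.
upstream gateway {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;   # requests are distributed across instances
}

server {
    listen 80;

    # Serve static resources directly from disk
    location /static/ {
        root /var/www;
        expires 7d;          # let browsers cache static files
    }

    # Forward everything else to the backend gateway
    location / {
        proxy_pass http://gateway;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```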

2. How do you choose between Nginx and a CDN for serving static resources?

When choosing between serving static resources from Nginx and from a CDN, you can weigh the following factors:

  1. Performance and speed: CDN static resource services typically have a wider range of distributed nodes and caching mechanisms, which can respond to user requests faster and reduce transmission distances and network congestion. If the loading speed and performance of static resources are the primary considerations, a CDN may be a better choice.
  2. Control and customization: Nginx static resource service provides greater flexibility and control, which can be customized and configured according to specific needs. If you need more granular control and customization capabilities, or if you need to deploy in a specific network environment, Nginx may be a better fit.
  3. Cost and budget: CDN static resource service usually requires additional fees, while Nginx static resource service can be built and deployed by yourself, and the cost is relatively low. When considering options, you need to consider a combination of cost and budget factors.
  4. Content distribution and global coverage: If static resources need to be distributed to users around the world, CDN distributed nodes of static resource services can better meet this demand and provide broader content distribution and global coverage.

Whether to choose Nginx or a CDN depends on your specific needs and scenario. If you want better performance and global coverage, choose a CDN; if you need more control and customization and your performance requirements are not especially high, serving static resources from Nginx is sufficient.

5. Gateway

In a microservice architecture, the gateway serves the following functions:

  1. Unified entry point: as the single entry to the entire microservice architecture, all requests pass through the gateway, which hides the details of internal microservices and reduces the chance of backend services being attacked.
  2. Routing and forwarding: the gateway routes requests to the corresponding microservice instances based on the request's path and parameters. This decouples services, so that each microservice can be developed, tested, and deployed independently.
  3. Security and authentication: gateways often integrate authentication and permission validation to ensure that only authenticated requests reach the microservices. The gateway can also provide anti-crawler protection, rate limiting, and circuit breaking.
  4. Protocol conversion: since different technologies and protocols can coexist in a microservice architecture, the gateway can act as a protocol conversion point, translating between protocols for compatibility.
  5. Logging and monitoring: the gateway can record all request and response logs, providing data for troubleshooting, performance analysis, and security audits. It can also integrate monitoring and alerting, giving real-time feedback on the system's operating status.
  6. Service aggregation: in some scenarios, the gateway can aggregate data from multiple microservices and return it to the client in one response, reducing the number of round trips between client and microservices and improving performance.
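As a concrete illustration of the routing-and-forwarding role, here is a hypothetical Spring Cloud Gateway route configuration; the service names and paths are invented:

```yaml
# Illustrative application.yml fragment; ids, uris and paths are made up.
spring:
  cloud:
    gateway:
      routes:
        - id: order-service
          uri: lb://order-service      # load-balance across instances found in the registry
          predicates:
            - Path=/api/order/**       # requests matching this path take this route
          filters:
            - StripPrefix=1            # drop the /api prefix before forwarding
        - id: user-service
          uri: lb://user-service
          predicates:
            - Path=/api/user/**
          filters:
            - StripPrefix=1
```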

6. Registration Center Nacos

In a microservice architecture, Nacos mainly acts as a registry, a configuration center, and a provider of service health checks.

  1. Registry: Nacos supports DNS-based and RPC-based service discovery. Microservices register their interface services with Nacos, and clients look up and call these service instances through it.
  2. Configuration center: Nacos provides a dynamic configuration service. Configuration items can be modified in the configuration center and released without restarting the backend service, which improves the flexibility and maintainability of the system.
  3. Service health check: Nacos provides a series of service governance functions, such as health checks, load balancing, and fault tolerance. Health checks prevent requests from being sent to unhealthy hosts or instances, ensuring the stability and reliability of the service, while load balancing distributes requests across instances according to configured policies to improve concurrency and performance.
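A hypothetical application.yml fragment showing how a Spring Cloud Alibaba service might point at Nacos as both registry and configuration center; the addresses and service name are invented:

```yaml
# Illustrative fragment only; addresses and names are made up.
spring:
  application:
    name: order-service               # name under which this instance registers
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848   # registry address
      config:
        server-addr: 127.0.0.1:8848   # config center address
        file-extension: yaml          # format of the config stored in Nacos
```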

7. Redis caching

1. In a microservice architecture, Redis mainly plays the following roles:

  1. Caching service: Redis can act as a cache server, keeping commonly used data in memory to improve access speed and response time, reduce pressure on the database, and accelerate backend queries.
  2. Session management: Redis can store session information to implement distributed session management, so that sessions can be shared and accessed across multiple services, providing a consistent user experience.
  3. Distributed locks: Redis provides a distributed locking mechanism that coordinates access by multiple nodes to shared resources, avoiding race conditions and resource conflicts.
  4. Message queue: Redis supports publish-subscribe and queue patterns and can serve as message middleware, letting microservices communicate asynchronously for decoupling and high availability.
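The distributed-lock idea can be sketched as follows. This is an in-memory stand-in for Redis's "SET key value NX PX ttl" pattern, shown only to illustrate the logic; a real implementation would issue those commands against Redis and release the lock with a Lua script that checks the owner token:

```java
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for a Redis-backed lock, illustrating the logic only.
public class LockSketch {
    private static class Entry {
        final String owner;
        final long expiresAt;
        Entry(String owner, long expiresAt) {
            this.owner = owner;
            this.expiresAt = expiresAt;
        }
    }

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();

    // Acquire succeeds only if the key is absent or expired (NX + TTL semantics).
    public boolean tryLock(String key, String owner, long ttlMillis) {
        long now = System.currentTimeMillis();
        Entry fresh = new Entry(owner, now + ttlMillis);
        Entry result = store.compute(key, (k, e) ->
                (e == null || e.expiresAt <= now) ? fresh : e);
        return result == fresh;
    }

    // Release only if we still own the lock (mirrors the check-then-delete script).
    public boolean unlock(String key, String owner) {
        Entry e = store.get(key);
        if (e != null && e.owner.equals(owner)) {
            return store.remove(key, e);
        }
        return false;
    }

    public static void main(String[] args) {
        LockSketch locks = new LockSketch();
        System.out.println(locks.tryLock("report-job", "worker-1", 30_000)); // true
        System.out.println(locks.tryLock("report-job", "worker-2", 30_000)); // false: held
        locks.unlock("report-job", "worker-1");
        System.out.println(locks.tryLock("report-job", "worker-2", 30_000)); // true
    }
}
```

The TTL matters: if the lock holder crashes, the lock expires on its own instead of blocking everyone forever.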

2. Race conditions

A race condition can corrupt execution results in many ways: the program may crash, abort with an illegal-operation error, read stale data, or write new data incorrectly. It can be prevented by serializing access to memory or storage; when a read and a write arrive at the same time, the read is typically performed first.

Race conditions also occur in networks, for example when two users try to seize the same available channel at the same time and neither can be told the channel is occupied before the system grants access. Statistically, this mostly happens in networks with long delays, such as those using geostationary satellites.

To prevent this kind of race condition, a priority scheme is needed; for example, usernames earlier in the alphabet might be given precedence. Note that attackers can also exploit race-condition weaknesses to gain illegal access to a network.

3. How do you manage sessions with Redis?

General steps for Redis session management:

  1. Session creation: when a user accesses the app for the first time, create a new session in Redis under a unique identifier; the value can be a data structure such as a hash or a string.
  2. Session storage: associate the session information, such as user identity, login status, and permissions, with the session ID and store it in Redis.
  3. Expiration: set an expiry so the session lapses automatically after a certain amount of time. Redis can attach a time-to-live to key-value pairs, for example with the EXPIRE command.
  4. Access and update: on each request, look up the session by its ID, validate it, and refresh it; if it has expired, ask the user to log in again.
  5. Termination: when the user logs out voluntarily, or when the session expires, delete the session information stored in Redis.
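The five steps above can be sketched in memory as follows. This stand-in mirrors Redis's SETEX/GET/DEL semantics purely for illustration; a real deployment would store sessions in Redis so every service instance shares them. Class and field names are invented:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of Redis-style session management with expiry.
public class SessionSketch {
    private static class Session {
        final String userId;
        final long expiresAt;
        Session(String userId, long expiresAt) {
            this.userId = userId;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Session> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public SessionSketch(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Step 1-3: create a session under a unique id with an expiry (like SETEX).
    public String create(String userId) {
        String id = UUID.randomUUID().toString();
        store.put(id, new Session(userId, System.currentTimeMillis() + ttlMillis));
        return id;
    }

    // Step 4: look up and validate; an expired session behaves as missing.
    public String userOf(String sessionId) {
        Session s = store.get(sessionId);
        if (s == null || s.expiresAt <= System.currentTimeMillis()) {
            store.remove(sessionId);
            return null;    // caller should ask the user to log in again
        }
        return s.userId;
    }

    // Step 5: explicit logout (like DEL).
    public void invalidate(String sessionId) {
        store.remove(sessionId);
    }

    public static void main(String[] args) {
        SessionSketch sessions = new SessionSketch(30 * 60 * 1000L);
        String id = sessions.create("user-42");
        System.out.println(sessions.userOf(id)); // user-42
        sessions.invalidate(id);
        System.out.println(sessions.userOf(id)); // null
    }
}
```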

8. Elasticsearch full-text search engine

In the microservice architecture, the application of the Elasticsearch full-text search engine is mainly reflected in the following aspects:

  1. Full-text search engine: ES is a distributed full-text search engine, which can conduct real-time full-text search on massive data and return keyword-related results.
  2. Distributed storage: Elasticsearch provides a distributed real-time file storage function, and each field can be indexed and searched, which makes data storage and query in Elasticsearch very efficient.
  3. Data analysis: together with Kibana, data in Elasticsearch can be visually analyzed in real time to support data-driven decisions.
  4. Logs and monitoring: Elasticsearch can serve as a storage and analysis platform for logs and monitoring data. Log information collected from the system is stored in Elasticsearch for real-time querying, analysis, alerting, and display.
  5. Scalability: Elasticsearch is highly scalable, scaling horizontally to hundreds of servers and processing petabytes of data, enabling Elasticsearch to cope with the challenges of massive amounts of data.
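As a small illustration of the log-search use case, here is a hypothetical Elasticsearch query DSL body that finds log entries containing a keyword within the last hour; the field names are invented:

```json
{
  "query": {
    "bool": {
      "must":   { "match": { "message": "timeout" } },
      "filter": { "range": { "@timestamp": { "gte": "now-1h" } } }
    }
  }
}
```

The `match` clause performs the full-text search over the inverted index, while the `range` filter narrows results by time without affecting relevance scoring.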

9. Redis and Elasticsearch can feel similar. How do they differ in a microservice architecture?

  1. Data storage and query mode: Redis is a key-value pair-based storage system that provides high-performance read and write operations and is suitable for scenarios with simple storage structures and query conditions. Elasticsearch is a distributed search and analytics engine that is suitable for complex scenarios such as full-text search and data analysis, and can handle more complex query requirements.
  2. Data structure and processing capabilities: Redis supports rich data structures, such as strings, hashes, lists, and sets, and provides atomic operations, which suits caching, message queues, counters, and similar functions. Elasticsearch is built on inverted indexes and provides powerful search and analysis capabilities, but its write efficiency is lower than Redis's.
  3. Real-time behavior and consistency: Redis keeps data in memory and performs reads and writes very quickly. Elasticsearch is a near-real-time search platform, so it is less real-time than Redis.
  4. Scalability: Redis is scaled by adding Redis instances, which may require data sharding for very large datasets. Elasticsearch, on the other hand, has the ability to scale horizontally and can increase the processing capacity of the system by adding more nodes, which is suitable for scenarios with large amounts of data.