Preface
Architecturally, Tomcat is composed of Service, Engine, Host, Context, and Wrapper. So when a user sends a request, how does Tomcat map the URL onto a concrete Wrapper?
The Mapper component
Tomcat's Mapper component performs the mapping from a URL to the Host, Context, and Wrapper containers.
Its core function is request-path routing: given a request path, it computes the corresponding Servlet (Wrapper).
The classes related to URL-to-Wrapper mapping live in the org.apache.catalina.mapper package. There are four of them:
- Mapper: the central class of the mapping machinery. It initializes, updates, stores, and performs the URL-to-Host/Context/Wrapper mappings.
- MapperListener: implements the ContainerListener and LifecycleListener interfaces. It listens for component changes in Tomcat; when a Host, Context, or Wrapper is added or removed, it calls the corresponding Mapper methods to register or deregister it.
- MappingData: the result of mapping a URL; it records which Host, which Context, and which Wrapper the URL resolved to.
- WrapperMappingInfo: a plain data holder describing one Wrapper; not particularly important.
The Mapper's main job is mapping URLs to Wrappers, which breaks down into three responsibilities:
- 1. Storing the mappings: it holds the relationships between all Hosts, Contexts, and Wrappers.
- 2. Initializing and updating the mappings: when a component is added or removed, the Mapper maintains the URL-to-Wrapper relationships accordingly.
- 3. Using the mappings: given a URL, it resolves the concrete Host, Context, and Wrapper.
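To make the three responsibilities concrete, here is a minimal, hypothetical sketch (not Tomcat source; all names such as ToyMapper and addWrapper are invented for illustration) of a mapper that stores the host/context/wrapper relationships in nested sorted maps and resolves a URL:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of the Mapper's storage and lookup responsibilities.
public class ToyMapper {
    // host name -> (context path -> (wrapper path -> servlet name))
    private final Map<String, Map<String, Map<String, String>>> hosts = new TreeMap<>();

    // Responsibility 2: register a new wrapper as components are added.
    public void addWrapper(String host, String contextPath, String wrapperPath, String servlet) {
        hosts.computeIfAbsent(host, h -> new TreeMap<>())
             .computeIfAbsent(contextPath, c -> new TreeMap<>())
             .put(wrapperPath, servlet);
    }

    // Responsibility 3: resolve a URL to a servlet name.
    // Pick the host, then the longest matching context path prefix, then an exact wrapper match.
    public String map(String host, String uri) {
        Map<String, Map<String, String>> contexts = hosts.get(host);
        if (contexts == null) return null;
        String bestContext = null;
        for (String ctx : contexts.keySet()) {
            if (uri.startsWith(ctx) && (bestContext == null || ctx.length() > bestContext.length())) {
                bestContext = ctx;
            }
        }
        if (bestContext == null) return null;
        String wrapperPath = uri.substring(bestContext.length());
        return contexts.get(bestContext).get(wrapperPath);
    }

    public static void main(String[] args) {
        ToyMapper m = new ToyMapper();
        m.addWrapper("localhost", "/app", "/hello", "HelloServlet");
        System.out.println(m.map("localhost", "/app/hello")); // HelloServlet
    }
}
```

The real Mapper keeps sorted arrays (MappedHost, MappedContext, MappedWrapper) and uses binary search plus the four matching rules described below, but the nested host-to-context-to-wrapper shape is the same.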
Overall flow
![](https://img.laitimes.com/img/_0nNw4CM6IyYiwiM6ICdiwiI9s2RkBnVHFmb1clWvB3MaVnRtp1XlBXe0xCMy81dvRWYoNHLwEzX5xCMx8FesU2cfdGLwMzX0xiRGZkRGZ0Xy9GbvNGLpZTY1EmMZVDUSFTU4VFRR9Fd4VGdsQTMfVmepNHLrJXYtJXZ0F2dvwVZnFWbp1zczV2YvJHctM3cv1Ce-cmbw5SOyMzNyQGOycjNxMTNiZzMzYzXzUTMycTMwMzLclDMyIDMy8CXn9Gbi9CXzV2Zh1WavwVbvNmLvR3YxUjLyM3Lc9CX6MHc0RHaiojIsJye.png)
Connection handling and protocol parsing
- The Acceptor in the Connector's Endpoint listens for client socket connections and accepts the Socket
- The connection is handed to the Executor thread pool, which starts the request-response task
- The Processor reads the message, parses the request line, request headers, and request body, and wraps them into a Request object
Request routing and processing
- The Mapper matches the URL from the request line and the Host header against the containers to decide which Host, Context, and Wrapper will handle the request
- The CoyoteAdapter bridges the Connector and the Engine container: it passes the generated Request object and the corresponding Response object to the Engine and invokes its Pipeline
- The Engine container's pipeline starts processing. A pipeline contains several Valves, each responsible for part of the logic; after they run, the basic valve, StandardEngineValve, executes and invokes the Host container's Pipeline
- The Host container's pipeline processes the request the same way and finally invokes the Context container's Pipeline
- The Context container's pipeline processes the request the same way and finally invokes the Wrapper container's Pipeline
- The Wrapper container's pipeline processes the request the same way
- Finally, the service method of the Servlet behind the Wrapper container is executed
Overall conclusions
A Service has one Engine, and an Engine has one Mapper. From the relationships between Engine, Host, Context, and Wrapper, the following structure falls out naturally.
- One Mapper holds multiple Host entries (indeed it does: each is called a MappedHost, the MappedHosts are kept in an array, and the elements are sorted by name)
- One Host entry contains multiple Contexts (in the Mapper a Context is a MappedContext, also stored in an array sorted by name. Unlike the other components, each Context can exist in several versions, so each MappedContext holds multiple ContextVersion objects, all representing versions of the same Context)
- One Context contains multiple Wrappers (here "Context" means a ContextVersion, which holds multiple Wrappers split into four groups: exact-match Wrappers, prefix-match Wrappers, extension-match Wrappers, and the default Wrapper. In the Mapper each Wrapper is represented by a MappedWrapper)
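The four matching groups can be sketched as follows. This is a simplified, hypothetical illustration of the priority order (exact, then longest prefix, then extension, then default), not Tomcat's actual implementation, which stores the wrappers in sorted arrays and uses binary search:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the four wrapper-matching rules a ContextVersion applies.
public class WrapperMatcher {
    private final Map<String, String> exact = new HashMap<>();     // "/hello" -> servlet
    private final Map<String, String> prefix = new HashMap<>();    // "/api"   -> servlet ("/api/*")
    private final Map<String, String> extension = new HashMap<>(); // "jsp"    -> servlet ("*.jsp")
    private final String defaultServlet;

    public WrapperMatcher(String defaultServlet) { this.defaultServlet = defaultServlet; }
    public void addExact(String path, String s) { exact.put(path, s); }
    public void addPrefix(String path, String s) { prefix.put(path, s); }
    public void addExtension(String ext, String s) { extension.put(ext, s); }

    public String match(String path) {
        String s = exact.get(path);                     // 1. exact match
        if (s != null) return s;
        String bestPrefix = null;                       // 2. longest prefix match
        for (Map.Entry<String, String> e : prefix.entrySet()) {
            String p = e.getKey();
            if ((path.equals(p) || path.startsWith(p + "/"))
                    && (bestPrefix == null || p.length() > bestPrefix.length())) {
                bestPrefix = p;
                s = e.getValue();
            }
        }
        if (s != null) return s;
        int dot = path.lastIndexOf('.');                // 3. extension match
        if (dot >= 0) {
            s = extension.get(path.substring(dot + 1));
            if (s != null) return s;
        }
        return defaultServlet;                          // 4. default wrapper
    }
}
```

For example, with "/login" registered as exact, "/api" as prefix, and "jsp" as extension, the paths "/login", "/api/users", "/index.jsp", and "/other" resolve to the exact, prefix, extension, and default servlets respectively.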
This, then, is how the Mapper is composed.
When a request arrives, the Mapper parses the domain name and path out of the request URL and looks them up in the maps it maintains, which locates a Servlet. Note that a request URL ultimately resolves to exactly one Wrapper container, i.e. one Servlet.
Now let's analyze request processing from the level of Tomcat's overall architecture.
The steps are:
- 1. The Acceptor in the Connector's Endpoint listens for client socket connections and accepts the Socket.
- 2. The connection is handed to the Executor thread pool, which starts the request-response task.
- 3. The Processor reads the message, parses the request line, request headers, and request body, and wraps them into a Request object.
- 4. The Mapper matches the URL from the request line and the Host header to decide which Host, Context, and Wrapper containers handle the request.
- 5. The CoyoteAdapter bridges the Connector and the Engine container: it passes the generated Request object and the corresponding Response object to the Engine and invokes its Pipeline.
- 6. The Engine container's pipeline starts processing. A pipeline contains several Valves, each responsible for part of the logic; after they run, the basic valve, StandardEngineValve, executes and invokes the Host container's Pipeline.
- 7. The Host container's pipeline processes the request similarly and finally invokes the Context container's Pipeline.
- 8. The Context container's pipeline processes the request similarly and finally invokes the Wrapper container's Pipeline.
- 9. The Wrapper container's pipeline processes the request similarly and finally executes the service method of the Servlet behind the Wrapper container.
Connector
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
The Connector accepts requests and wraps them into Request and Response objects for further processing; at the lowest level it uses a Socket. The wrapped Request and Response are handed to the Container (the Servlet container), the Container's result is returned to the Connector, and the response is finally written back to the client over the Socket.
Structure
Inside the Connector, requests are actually handled by a ProtocolHandler, which represents the connection type: Http11Protocol uses a plain blocking Socket, while Http11NioProtocol uses NIO sockets.
A ProtocolHandler has three important components:
- Endpoint: handles the underlying socket network connection;
- Processor: turns the Socket received by the Endpoint into a Request;
- Adapter: hands the wrapped Request over to the Container for processing.
The abstract class AbstractEndpoint defines two inner classes, Acceptor and AsyncTimeout, plus a Handler interface.
- Acceptor: listens for incoming connections;
- AsyncTimeout: checks asynchronous requests for timeouts;
- Handler: processes accepted Sockets, internally delegating to a Processor.
public class Connector extends LifecycleMBeanBase {
protected final String protocolHandlerClassName;
public Connector() {
this("org.apache.coyote.http11.Http11NioProtocol");
}
public Connector(String protocol) {
boolean aprConnector = AprLifecycleListener.isAprAvailable() &&
AprLifecycleListener.getUseAprConnector();
// Find the appropriate class name based on the protocol attribute of the
// Connector element in server.xml; here it resolves to
// org.apache.coyote.http11.Http11NioProtocol and is assigned to protocolHandlerClassName
if ("HTTP/1.1".equals(protocol) || protocol == null) {
if (aprConnector) {
protocolHandlerClassName = "org.apache.coyote.http11.Http11AprProtocol";
} else {
protocolHandlerClassName = "org.apache.coyote.http11.Http11NioProtocol";
}
} else if ("AJP/1.3".equals(protocol)) {
if (aprConnector) {
protocolHandlerClassName = "org.apache.coyote.ajp.AjpAprProtocol";
} else {
protocolHandlerClassName = "org.apache.coyote.ajp.AjpNioProtocol";
}
} else {
protocolHandlerClassName = protocol;
}
// Instantiate protocol handler
ProtocolHandler p = null;
try {
Class<?> clazz = Class.forName(protocolHandlerClassName);
// Reflectively creating Http11NioProtocol also creates a NioEndpoint in its constructor.
p = (ProtocolHandler) clazz.getConstructor().newInstance();
} catch (Exception e) {
log.error(sm.getString(
"coyoteConnector.protocolHandlerInstantiationFailed"), e);
} finally {
this.protocolHandler = p;
}
// Default for Connector depends on this system property
setThrowOnFailure(Boolean.getBoolean("org.apache.catalina.startup.EXIT_ON_INIT_FAILURE"));
}
// Initialization
@Override
protected void initInternal() throws LifecycleException {
super.initInternal();
if (protocolHandler == null) {
throw new LifecycleException(
sm.getString("coyoteConnector.protocolHandlerInstantiationFailed"));
}
// Initialize adapter
// Initialization builds the CoyoteAdapter (with this Connector as its argument)
adapter = new CoyoteAdapter(this);
// Bind the adapter to the protocol handler
protocolHandler.setAdapter(adapter);
if (service != null) {
protocolHandler.setUtilityExecutor(service.getServer().getUtilityExecutor());
}
// Make sure parseBodyMethodsSet has a default
if (null == parseBodyMethodsSet) {
setParseBodyMethods(getParseBodyMethods());
}
if (protocolHandler.isAprRequired() && !AprLifecycleListener.isInstanceCreated()) {
throw new LifecycleException(sm.getString("coyoteConnector.protocolHandlerNoAprListener",
getProtocolHandlerClassName()));
}
if (protocolHandler.isAprRequired() && !AprLifecycleListener.isAprAvailable()) {
throw new LifecycleException(sm.getString("coyoteConnector.protocolHandlerNoAprLibrary",
getProtocolHandlerClassName()));
}
if (AprLifecycleListener.isAprAvailable() && AprLifecycleListener.getUseOpenSSL() &&
protocolHandler instanceof AbstractHttp11JsseProtocol) {
AbstractHttp11JsseProtocol<?> jsseProtocolHandler =
(AbstractHttp11JsseProtocol<?>) protocolHandler;
if (jsseProtocolHandler.isSSLEnabled() &&
jsseProtocolHandler.getSslImplementationName() == null) {
// OpenSSL is compatible with the JSSE configuration, so use it if APR is available
jsseProtocolHandler.setSslImplementationName(OpenSSLImplementation.class.getName());
}
}
try {
// Run the protocol handler's initialization
protocolHandler.init();
} catch (Exception e) {
throw new LifecycleException(
sm.getString("coyoteConnector.protocolHandlerInitializationFailed"), e);
}
}
// Start handling requests with this connector
@Override
protected void startInternal() throws LifecycleException {
// Validate settings before starting
// Validate the port
if (getPortWithOffset() < 0) {
throw new LifecycleException(sm.getString(
"coyoteConnector.invalidPort", Integer.valueOf(getPortWithOffset())));
}
// Set the lifecycle state
setState(LifecycleState.STARTING);
try {
// Call the protocol handler's start method
protocolHandler.start();
} catch (Exception e) {
throw new LifecycleException(
sm.getString("coyoteConnector.protocolHandlerStartFailed"), e);
}
}
}
Http11NioProtocol
It mainly consists of the NioEndpoint and Http11NioProcessor components. At startup the NioEndpoint begins listening on the port; incoming connections are registered as NioChannels, and the Poller thread detects read/write events on those channels, creates tasks, and submits them to the thread pool for processing.
public class Http11NioProtocol extends AbstractHttp11JsseProtocol<NioChannel> {
// Creates the NioEndpoint
public Http11NioProtocol() {
// Ultimately delegates to AbstractProtocol
super(new NioEndpoint());
}
}
// Parent class AbstractProtocol
public abstract class AbstractProtocol<S> implements ProtocolHandler, MBeanRegistration {
@Override
public void init() throws Exception {
// endpointName, e.g. "http-nio-8080": the protocol name plus the port.
String endpointName = getName();
// Strip the surrounding quotes
endpoint.setName(endpointName.substring(1, endpointName.length()-1));
// domain == catalina
endpoint.setDomain(domain);
// Initialize the Endpoint:
// the serverSocketChannel binds the port
endpoint.init();
}
@Override
public void start() throws Exception {
// The Endpoint's start method (the key part):
// creates the Executor, sets the max connections, starts the Poller thread
endpoint.start();
// Async timeout thread
asyncTimeout = new AsyncTimeout();
Thread timeoutThread = new Thread(asyncTimeout, getNameInternal() + "-AsyncTimeout");
int priority = endpoint.getThreadPriority();
if (priority < Thread.MIN_PRIORITY || priority > Thread.MAX_PRIORITY) {
priority = Thread.NORM_PRIORITY;
}
timeoutThread.setPriority(priority);
timeoutThread.setDaemon(true);
timeoutThread.start();
}
}
// AsyncTimeout implements Runnable
protected class AsyncTimeout implements Runnable {
private volatile boolean asyncTimeoutRunning = true;
// Background thread that checks async requests and fires timeouts when there has been no activity.
@Override
public void run() {
// Loop until a shutdown command is received
while (asyncTimeoutRunning) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// Ignore
}
long now = System.currentTimeMillis();
for (Processor processor : waitingProcessors) {
processor.timeoutAsync(now);
}
// Loop while the Endpoint is paused
while (endpoint.isPaused() && asyncTimeoutRunning) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// Ignore
}
}
}
}
}
Request-flow source walkthrough
In Tomcat's overall architecture, the components each have a single responsibility and are loosely coupled, which keeps the whole system scalable and extensible. But how is flexibility and extensibility achieved inside a component? In Tomcat, each Container component processes requests through the chain-of-responsibility pattern.
Tomcat defines two interfaces, Pipeline and Valve: a Pipeline builds the responsibility chain, and a Valve is one handler on that chain. A Pipeline maintains a basic Valve that always sits at the end of the pipeline (executed last) and encapsulates the actual request processing and response output. We can also call addValve() to add further Valves to the Pipeline; added Valves sit in front of the basic Valve and execute in the order they were added. The Pipeline starts the whole chain by invoking its first Valve.
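The Pipeline/Valve contract described above can be sketched roughly like this. This is a toy model, not Tomcat's real interfaces (Tomcat's Valve.invoke takes Request and Response objects, and valves link to each other via getNext() rather than a callback):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a Pipeline with a chain of Valves:
// added valves run in insertion order, and the basic valve always runs last.
public class PipelineDemo {
    interface Valve { void invoke(List<String> trace, Runnable next); }

    static class Pipeline {
        private final List<Valve> valves = new ArrayList<>();
        private final Valve basic;
        Pipeline(Valve basic) { this.basic = basic; }
        // Added valves are placed in front of the basic valve.
        void addValve(Valve v) { valves.add(v); }
        void invoke(List<String> trace) { run(0, trace); }
        private void run(int i, List<String> trace) {
            if (i < valves.size()) {
                valves.get(i).invoke(trace, () -> run(i + 1, trace));
            } else {
                basic.invoke(trace, () -> {}); // basic valve: end of the chain
            }
        }
    }

    public static void main(String[] args) {
        // The basic valve is fixed at construction time, like StandardEngineValve.
        Pipeline engine = new Pipeline((t, next) -> t.add("StandardEngineValve"));
        engine.addValve((t, next) -> { t.add("AccessLogValve"); next.run(); });
        engine.addValve((t, next) -> { t.add("ErrorReportValve"); next.run(); });
        List<String> trace = new ArrayList<>();
        engine.invoke(trace);
        System.out.println(trace); // [AccessLogValve, ErrorReportValve, StandardEngineValve]
    }
}
```

In real Tomcat, the basic valve of each container (StandardEngineValve, StandardHostValve, StandardContextValve, StandardWrapperValve) is what hands the request down to the next container's pipeline, which is how the Engine-to-Host-to-Context-to-Wrapper chain in the steps above is formed.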
Source entry points
- org.apache.tomcat.util.net.NioEndpoint#startInternal (startup complete; accepting requests)
- org.apache.tomcat.util.net.NioEndpoint.Poller#run (handling accepted requests)
- org.apache.tomcat.util.net.NioEndpoint.Poller#processKey
- org.apache.tomcat.util.net.AbstractEndpoint#processSocket
Executing AbstractEndpoint.processSocket
- org.apache.tomcat.util.net.AbstractEndpoint#processSocket
- The Executor thread pool runs the SocketProcessorBase task
- The Acceptor in the Connector's Endpoint listens for client socket connections and accepts the Socket
public abstract class AbstractEndpoint<S,U> {
public boolean processSocket(SocketWrapperBase<S> socketWrapper,
SocketEvent event, boolean dispatch) {
try {
if (socketWrapper == null) {
return false;
}
SocketProcessorBase<S> sc = processorCache.pop();
if (sc == null) {
sc = createSocketProcessor(socketWrapper, event);
} else {
sc.reset(socketWrapper, event);
}
Executor executor = getExecutor();
if (dispatch && executor != null) {
executor.execute(sc);
} else {
sc.run();
}
} catch (RejectedExecutionException ree) {
getLog().warn(sm.getString("endpoint.executor.fail", socketWrapper) , ree);
return false;
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
// This means we got an OOM or similar creating a thread, or that
// the pool and its queue are full
getLog().error(sm.getString("endpoint.process.fail"), t);
return false;
}
return true;
}
}
Executing SocketProcessorBase.run
- org.apache.tomcat.util.net.SocketProcessorBase#run
- Calls the doRun method, which
  ○ performs the TLS handshake (if TLS is in use)
  ○ obtains the handler and processes the request
- The connection is handed to the Executor thread pool, which starts the request-response task
public abstract class SocketProcessorBase<S> implements Runnable {
protected SocketWrapperBase<S> socketWrapper;
protected SocketEvent event;
@Override
public final void run() {
synchronized (socketWrapper) {
// It is possible that processing may be triggered for read and
// write at the same time. The sync above makes sure that processing
// does not occur in parallel. The test below ensures that if the
// first event to be processed results in the socket being closed,
// the subsequent events are not processed.
if (socketWrapper.isClosed()) {
return;
}
doRun();
}
}
protected abstract void doRun();
}
public class NioEndpoint extends AbstractJsseEndpoint<NioChannel,SocketChannel> {
protected class SocketProcessor extends SocketProcessorBase<NioChannel> {
public SocketProcessor(SocketWrapperBase<NioChannel> socketWrapper, SocketEvent event) {
super(socketWrapper, event);
}
@Override
protected void doRun() {
NioChannel socket = socketWrapper.getSocket();
SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
try {
// Perform the TLS handshake
int handshake = -1;
try {
if (key != null) {
if (socket.isHandshakeComplete()) {
// No TLS handshaking required. Let the handler
// process this socket / event combination.
handshake = 0;
} else if (event == SocketEvent.STOP || event == SocketEvent.DISCONNECT ||
event == SocketEvent.ERROR) {
// Unable to complete the TLS handshake. Treat it as
// if the handshake failed.
handshake = -1;
} else {
handshake = socket.handshake(key.isReadable(), key.isWritable());
// The handshake process reads/writes from/to the
// socket. status may therefore be OPEN_WRITE once
// the handshake completes. However, the handshake
// happens when the socket is opened so the status
// must always be OPEN_READ after it completes. It
// is OK to always set this as it is only used if
// the handshake completes.
event = SocketEvent.OPEN_READ;
}
}
} catch (IOException x) {
handshake = -1;
if (log.isDebugEnabled()) log.debug("Error during SSL handshake",x);
} catch (CancelledKeyException ckx) {
handshake = -1;
}
if (handshake == 0) {
SocketState state = SocketState.OPEN;
// Process the request from this socket
if (event == null) {
state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
} else {
// Obtain the handler and process the request
state = getHandler().process(socketWrapper, event);
}
if (state == SocketState.CLOSED) {
close(socket, key);
}
} else if (handshake == -1 ) {
close(socket, key);
} else if (handshake == SelectionKey.OP_READ){
socketWrapper.registerReadInterest();
} else if (handshake == SelectionKey.OP_WRITE){
socketWrapper.registerWriteInterest();
}
} catch (CancelledKeyException cx) {
socket.getPoller().cancelledKey(key);
} catch (VirtualMachineError vme) {
ExceptionUtils.handleThrowable(vme);
} catch (Throwable t) {
log.error(sm.getString("endpoint.processing.fail"), t);
socket.getPoller().cancelledKey(key);
} finally {
socketWrapper = null;
event = null;
//return to cache
if (running && !paused) {
processorCache.push(this);
}
}
}
}
}
Executing AbstractProtocol.process
- Obtains the current Processor
- Calls the processor's process method (here the Endpoint hands the request data to the Processor)
- The Processor reads the message, parses the request line, headers, and body, and wraps them into a Request object
public abstract class AbstractProtocol<S> implements ProtocolHandler,
MBeanRegistration {
public SocketState process(SocketWrapperBase<S> wrapper, SocketEvent status) {
S socket = wrapper.getSocket();
// Get the current Processor
Processor processor = (Processor) wrapper.getCurrentProcessor();
//....
SocketState state = SocketState.CLOSED;
// Call the processor's process method
state = processor.process(wrapper, status);
//...
}
}
Executing AbstractProcessorLight.process
- Runs dispatch or the service method
public abstract class AbstractProcessorLight implements Processor {
@Override
public SocketState process(SocketWrapperBase<?> socketWrapper, SocketEvent status)
throws IOException {
SocketState state = SocketState.CLOSED;
Iterator<DispatchType> dispatches = null;
do {
if (dispatches != null) {
DispatchType nextDispatch = dispatches.next();
// Run dispatch
state = dispatch(nextDispatch.getSocketStatus());
} else if (status == SocketEvent.DISCONNECT) {
// Do nothing here, just wait for it to get recycled
} else if (isAsync() || isUpgrade() || state == SocketState.ASYNC_END) {
// Run dispatch
state = dispatch(status);
if (state == SocketState.OPEN) {
// There may be pipe-lined data to read. If the data isn't
// processed now, execution will exit this loop and call
// release() which will recycle the processor (and input
// buffer) deleting any pipe-lined data. To avoid this,
// process it now.
state = service(socketWrapper);
}
} else if (status == SocketEvent.OPEN_WRITE) {
// Extra write event likely after async, ignore
state = SocketState.LONG;
} else if (status == SocketEvent.OPEN_READ){
// Run the service method
state = service(socketWrapper);
} else {
// Default to closing the socket if the SocketEvent passed in
// is not consistent with the current state of the Processor
state = SocketState.CLOSED;
}
if (getLog().isDebugEnabled()) {
getLog().debug("Socket: [" + socketWrapper +
"], Status in: [" + status +
"], State out: [" + state + "]");
}
if (state != SocketState.CLOSED && isAsync()) {
state = asyncPostProcess();
if (getLog().isDebugEnabled()) {
getLog().debug("Socket: [" + socketWrapper +
"], State after async post processing: [" + state + "]");
}
}
if (dispatches == null || !dispatches.hasNext()) {
// Only returns non-null iterator if there are
// dispatches to process.
dispatches = getIteratorAndClearDispatches();
}
} while (state == SocketState.ASYNC_END ||
dispatches != null && state != SocketState.CLOSED);
return state;
}
}
Executing Http11Processor.service
- org.apache.coyote.http11.Http11Processor#service
- Parses the request line, headers, and body into a Request object
- Obtains the adapter (CoyoteAdapter) and calls its service method
- The CoyoteAdapter bridges the Connector and the Engine container
public class Http11Processor extends AbstractProcessor {
public SocketState service(SocketWrapperBase<?> socketWrapper) throws IOException {
//......
// Get the adapter (CoyoteAdapter) and call its service method
getAdapter().service(request, response);
//......
}
}
Executing CoyoteAdapter.service
- Obtains the Request and Response objects
- Calls the container, passing the generated Request object and the corresponding Response object to the Engine container
public class CoyoteAdapter implements Adapter {
@Override
public void service(org.apache.coyote.Request req, org.apache.coyote.Response res)
throws Exception {
// Get the Request and Response objects
Request request = (Request) req.getNote(ADAPTER_NOTES);
Response response = (Response) res.getNote(ADAPTER_NOTES);
//......
postParseSuccess = postParseRequest(req, request, res, response);
if (postParseSuccess) {
//check valves if we support async
request.setAsyncSupported(
connector.getService().getContainer().getPipeline().isAsyncSupported());
// Calling the container
// Call the container: fetch the first Valve of the Engine's pipeline
connector.getService().getContainer().getPipeline().getFirst().invoke(
request, response);
}
//......
}
}
Executing the valve chain: xxxValve.invoke
- StandardEngineValve.invoke runs
- a. It obtains the Host and invokes the Host container's Pipeline
- b. The Engine's pipeline processes the request: it contains several Valves, each handling part of the logic, and after they run the basic Valve executes
- StandardHostValve.invoke runs
- a. It obtains the Context and invokes the Context container's Pipeline
- StandardContextValve.invoke runs
- a. It obtains the Wrapper and invokes the Wrapper container's Pipeline
Obtaining the Servlet: StandardWrapperValve.invoke
- Gets the Wrapper from the Container
- Gets the Servlet from the Wrapper; this is the concrete business Servlet
- Wraps the Servlet into a newly built FilterChain
- Runs the filters in the chain, passing in the ServletRequest and ServletResponse
final class StandardWrapperValve extends ValveBase {
@Override
public final void invoke(Request request, Response response)
throws IOException, ServletException {
//......
// Get the Wrapper from the Container
StandardWrapper wrapper = (StandardWrapper) getContainer();
Servlet servlet = null;
Context context = (Context) wrapper.getParent();
//......
// Get the servlet
servlet = wrapper.allocate();
//将Servlet封裝到FilterChain過濾器鍊中
ApplicationFilterChain filterChain = ApplicationFilterFactory.createFilterChain(request, wrapper, servlet);
//......
// Run the filters in the chain
filterChain.doFilter(request.getRequest(),response.getResponse());
//......
}
}
Executing the filters in the chain
- Obtains the ServletRequest and ServletResponse and runs the filters
- Calls the servlet.service method
- HttpServlet.service is then invoked, which
  ○ dispatches to doGet, doPost, etc.
public final class ApplicationFilterChain implements FilterChain {
@Override
public void doFilter(ServletRequest request, ServletResponse response)
throws IOException, ServletException {
//......
internalDoFilter(request,response);
}
private void internalDoFilter(ServletRequest request, ServletResponse response)
throws IOException, ServletException {
//......
// Run the filters
// Call the Servlet.service method
servlet.service(request, response);
//......
}
}
Executing HttpServlet.service
- Reads the request method
- Dispatches to the corresponding handler, e.g. doPost or doGet
public abstract class HttpServlet extends GenericServlet {
protected void service(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException {
// Read the request method
String method = req.getMethod();
// Dispatch to the corresponding handler
if (method.equals(METHOD_GET)) {
long lastModified = getLastModified(req);
if (lastModified == -1) {
// servlet doesn't support if-modified-since, no reason
// to go through further expensive logic
doGet(req, resp);
} else {
long ifModifiedSince;
try {
ifModifiedSince = req.getDateHeader(HEADER_IFMODSINCE);
} catch (IllegalArgumentException iae) {
// Invalid date header - proceed as if none was set
ifModifiedSince = -1;
}
if (ifModifiedSince < (lastModified / 1000 * 1000)) {
// If the servlet mod time is later, call doGet()
// Round down to the nearest second for a proper compare
// A ifModifiedSince of -1 will always be less
maybeSetLastModified(resp, lastModified);
doGet(req, resp);
} else {
resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
}
}
} else if (method.equals(METHOD_HEAD)) {
long lastModified = getLastModified(req);
maybeSetLastModified(resp, lastModified);
doHead(req, resp);
} else if (method.equals(METHOD_POST)) {
doPost(req, resp);
} else if (method.equals(METHOD_PUT)) {
doPut(req, resp);
} else if (method.equals(METHOD_DELETE)) {
doDelete(req, resp);
} else if (method.equals(METHOD_OPTIONS)) {
doOptions(req,resp);
} else if (method.equals(METHOD_TRACE)) {
doTrace(req,resp);
} else {
//
// Note that this means NO servlet supports whatever
// method was requested, anywhere on this server.
//
String errMsg = lStrings.getString("http.method_not_implemented");
Object[] errArgs = new Object[1];
errArgs[0] = method;
errMsg = MessageFormat.format(errMsg, errArgs);
resp.sendError(HttpServletResponse.SC_NOT_IMPLEMENTED, errMsg);
}
}
}
HTTP keep-alive support
- The client request carries Connection: keep-alive
- The service method parses it and records in the keepAlive flag whether the keep-alive conditions are met
- If they are not met, the response carries Connection: close, indicating the connection will be closed
- If they are met, no Connection header is returned
public class Http11Processor extends AbstractProcessor {
@Override
public SocketState service(SocketWrapperBase<?> socketWrapper)
throws IOException {
RequestInfo rp = request.getRequestProcessor();
rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);
// Setting up the I/O
setSocketWrapper(socketWrapper);
inputBuffer.init(socketWrapper);
outputBuffer.init(socketWrapper);
// Flags
keepAlive = true;
openSocket = false;
readComplete = true;
boolean keptAlive = false;
SendfileState sendfileState = SendfileState.DONE;
// Core of Tomcat's keep-alive handling: the while loop conditioned on keepAlive
while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
sendfileState == SendfileState.DONE && !protocol.isPaused()) {
// Parsing the request header
try {
if (!inputBuffer.parseRequestLine(keptAlive, protocol.getConnectionTimeout(),
protocol.getKeepAliveTimeout())) {
if (inputBuffer.getParsingRequestLinePhase() == -1) {
return SocketState.UPGRADING;
} else if (handleIncompleteRequestLineRead()) {
break;
}
}
if (protocol.isPaused()) {
// 503 - Service unavailable
response.setStatus(503);
setErrorState(ErrorState.CLOSE_CLEAN, null);
} else {
keptAlive = true;
// Set this every time in case limit has been changed via JMX
request.getMimeHeaders().setLimit(protocol.getMaxHeaderCount());
if (!inputBuffer.parseHeaders()) {
// We've read part of the request, don't recycle it
// instead associate it with the socket
openSocket = true;
readComplete = false;
break;
}
if (!protocol.getDisableUploadTimeout()) {
socketWrapper.setReadTimeout(protocol.getConnectionUploadTimeout());
}
}
} catch (IOException e) {
if (log.isDebugEnabled()) {
log.debug(sm.getString("http11processor.header.parse"), e);
}
setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
break;
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
UserDataHelper.Mode logMode = userDataHelper.getNextMode();
if (logMode != null) {
String message = sm.getString("http11processor.header.parse");
switch (logMode) {
case INFO_THEN_DEBUG:
message += sm.getString("http11processor.fallToDebug");
//$FALL-THROUGH$
case INFO:
log.info(message, t);
break;
case DEBUG:
log.debug(message, t);
}
}
// 400 - Bad Request
response.setStatus(400);
setErrorState(ErrorState.CLOSE_CLEAN, t);
}
// Has an upgrade been requested?
Enumeration<String> connectionValues = request.getMimeHeaders().values("Connection");
boolean foundUpgrade = false;
while (connectionValues.hasMoreElements() && !foundUpgrade) {
foundUpgrade = connectionValues.nextElement().toLowerCase(
Locale.ENGLISH).contains("upgrade");
}
if (foundUpgrade) {
// Check the protocol
String requestedProtocol = request.getHeader("Upgrade");
UpgradeProtocol upgradeProtocol = protocol.getUpgradeProtocol(requestedProtocol);
if (upgradeProtocol != null) {
if (upgradeProtocol.accept(request)) {
response.setStatus(HttpServletResponse.SC_SWITCHING_PROTOCOLS);
response.setHeader("Connection", "Upgrade");
response.setHeader("Upgrade", requestedProtocol);
action(ActionCode.CLOSE, null);
getAdapter().log(request, response, 0);
InternalHttpUpgradeHandler upgradeHandler =
upgradeProtocol.getInternalUpgradeHandler(
socketWrapper, getAdapter(), cloneRequest(request));
UpgradeToken upgradeToken = new UpgradeToken(upgradeHandler, null, null);
action(ActionCode.UPGRADE, upgradeToken);
return SocketState.UPGRADING;
}
}
}
if (getErrorState().isIoAllowed()) {
// Setting up filters, and parse some request headers
rp.setStage(org.apache.coyote.Constants.STAGE_PREPARE);
try {
prepareRequest();
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
if (log.isDebugEnabled()) {
log.debug(sm.getString("http11processor.request.prepare"), t);
}
// 500 - Internal Server Error
response.setStatus(500);
setErrorState(ErrorState.CLOSE_CLEAN, t);
}
}
// Maximum number of requests allowed on one keep-alive connection
int maxKeepAliveRequests = protocol.getMaxKeepAliveRequests();
if (maxKeepAliveRequests == 1) {
keepAlive = false;
} else if (maxKeepAliveRequests > 0 &&
socketWrapper.decrementKeepAlive() <= 0) {
keepAlive = false;
}
// Process the request in the adapter
if (getErrorState().isIoAllowed()) {
try {
rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
getAdapter().service(request, response);
// Handle when the response was committed before a serious
// error occurred. Throwing a ServletException should both
// set the status to 500 and set the errorException.
// If we fail here, then the response is likely already
// committed, so we can't try and set headers.
if(keepAlive && !getErrorState().isError() && !isAsync() &&
statusDropsConnection(response.getStatus())) {
setErrorState(ErrorState.CLOSE_CLEAN, null);
}
} catch (InterruptedIOException e) {
setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
} catch (HeadersTooLargeException e) {
log.error(sm.getString("http11processor.request.process"), e);
// The response should not have been committed but check it
// anyway to be safe
if (response.isCommitted()) {
setErrorState(ErrorState.CLOSE_NOW, e);
} else {
response.reset();
response.setStatus(500);
setErrorState(ErrorState.CLOSE_CLEAN, e);
response.setHeader("Connection", "close"); // TODO: Remove
}
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
log.error(sm.getString("http11processor.request.process"), t);
// 500 - Internal Server Error
response.setStatus(500);
setErrorState(ErrorState.CLOSE_CLEAN, t);
getAdapter().log(request, response, 0);
}
}
// Finish the handling of the request
rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);
if (!isAsync()) {
// If this is an async request then the request ends when it has
// been completed. The AsyncContext is responsible for calling
// endRequest() in that case.
endRequest();
}
rp.setStage(org.apache.coyote.Constants.STAGE_ENDOUTPUT);
// If there was an error, make sure the request is counted as
// and error, and update the statistics counter
if (getErrorState().isError()) {
response.setStatus(500);
}
if (!isAsync() || getErrorState().isError()) {
request.updateCounters();
if (getErrorState().isIoAllowed()) {
inputBuffer.nextRequest();
outputBuffer.nextRequest();
}
}
if (!protocol.getDisableUploadTimeout()) {
int connectionTimeout = protocol.getConnectionTimeout();
if(connectionTimeout > 0) {
socketWrapper.setReadTimeout(connectionTimeout);
} else {
socketWrapper.setReadTimeout(0);
}
}
rp.setStage(org.apache.coyote.Constants.STAGE_KEEPALIVE);
sendfileState = processSendfile(socketWrapper);
}
rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);
if (getErrorState().isError() || protocol.isPaused()) {
return SocketState.CLOSED;
} else if (isAsync()) {
return SocketState.LONG;
} else if (isUpgrade()) {
return SocketState.UPGRADING;
} else {
if (sendfileState == SendfileState.PENDING) {
return SocketState.SENDFILE;
} else {
if (openSocket) {
if (readComplete) {
return SocketState.OPEN;
} else {
return SocketState.LONG;
}
} else {
return SocketState.CLOSED;
}
}
}
}
}
References:
https://www.cnblogs.com/wansw/p/10244039.html
https://blog.csdn.net/nblife0000/article/details/60364847
https://blog.csdn.net/jiaomingliang/article/details/47414657
https://blog.csdn.net/nmjhehe/article/details/115533383