Java startup fails with a port conflict! Exception in thread "Thread-14" java.net.BindException: Address already in use: bind

Preface

This article walks through three different fixes, corresponding to three different situations and three different lines of reasoning.

My problem appeared some time after integrating xxl-job into a Spring Boot application. If your application integrates xxl-job, or anything else that needs its own port, this article may give you some ideas or solve your problem outright.

Table of Contents

  • Preface
  • 1 The Exception
  • 2 Locating the Problem
    • 2.1 Case 1
    • 2.2 Case 2
    • 2.3 Case 3
  • 3 Root Cause
  • 4 Further Study

1 The Exception

The exception below is thrown after the project starts. Strangely, though, the executor still registers successfully with the scheduling center and jobs run normally.

.   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.2.RELEASE)

2023-02-14 11:22:15.516  INFO 4436 --- [           main] com.jxj.SafetyWebserverApplication       : Starting SafetyWebserverApplication on abc with PID 4436 (C:\project\safetyproduction_collectdata\target\classes started by whx in C:\project\safetyproduction_collectdata)
2023-02-14 11:22:15.521  INFO 4436 --- [           main] com.jxj.SafetyWebserverApplication       : No active profile set, falling back to default profiles: default
2023-02-14 11:22:16.722  INFO 4436 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
....

2023-02-14 11:22:19.191  INFO 4436 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'taskScheduler'
2023-02-14 11:22:19.394  INFO 4436 --- [           main] com.jxj.config.XxlJobConfig              : >>>>>>>>>>> xxl-job config init.
2023-02-14 11:22:19.666  INFO 4436 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2023-02-14 11:22:20.271  INFO 4436 --- [           main] c.xxl.job.core.executor.XxlJobExecutor   : >>>>>>>>>>> xxl-job register jobhandler success, name:demoJobHandler, jobHandler:com.xxl.job.core.handler.impl.MethodJobHandler@c6bf8d9[class com.jxj.task.WarningTask#demoJobHandler]
2023-02-14 11:22:20.585  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'application.errorChannel' has 1 subscriber(s).
2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'
2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttPublisherConfig.mqttOutbound.serviceActivator} as a subscriber to the 'mqttOutboundChannel' channel
2023-02-14 11:22:20.586  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttOutboundChannel' has 1 subscriber(s).
2023-02-14 11:22:20.600  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttPublisherConfig.mqttOutbound.serviceActivator'
2023-02-14 11:22:20.600  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttSenderConfig.mqttOutbound.serviceActivator} as a subscriber to the 'mqttOutboundChannel1' channel
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttOutboundChannel1' has 1 subscriber(s).
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttSenderConfig.mqttOutbound.serviceActivator'
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {message-handler:mqttSubscriberConfig.handler.serviceActivator} as a subscriber to the 'mqttInboundChannel' channel
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.integration.channel.DirectChannel    : Channel 'application.mqttInboundChannel' has 1 subscriber(s).
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean 'mqttSubscriberConfig.handler.serviceActivator'
2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'
2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'
2023-02-14 11:22:20.601  INFO 4436 --- [           main] ProxyFactoryBean$MethodInvocationGateway : started bean 'mqttGateway'
2023-02-14 11:22:20.601  INFO 4436 --- [           main] o.s.i.gateway.GatewayProxyFactoryBean    : started bean 'mqttGateway'
Exception in thread "Thread-17" java.net.BindException: Address already in use: bind
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
2023-02-14 11:22:21.444  INFO 4436 --- [           main] .m.i.MqttPahoMessageDrivenChannelAdapter : started bean 'inbound'; defined in: 'class path resource [com/jxj/config/MqttSubscriberConfig.class]'; from source: 'org.springframework.core.type.classreading.SimpleMethodMetadata@1d4664d7'
2023-02-14 11:22:21.446  INFO 4436 --- [           main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]
2023-02-14 11:22:25.537  INFO 4436 --- [           main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.net.ConnectException: Connection refused: connect
2023-02-14 11:22:25.545  INFO 4436 --- [ntContainer#0-1] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]

2 Locating the Problem

2.1 Case 1

Some posts online say that in older versions of xxl-job, the @Bean that creates the XxlJobSpringExecutor must declare initMethod = "start", destroyMethod = "destroy", whereas in newer versions (such as 2.1.2) those attributes must be removed. That was not my situation: my bean did not declare (initMethod = "start", destroyMethod = "destroy"), and when I did add the attributes, the "Address already in use" exception was thrown twice instead of once.

Exception in thread "Thread-14" java.net.BindException: Address already in use: bind
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
2023-02-14 11:25:20.568  INFO 18140 --- [           main] c.xxl.job.core.executor.XxlJobExecutor   : >>>>>>>>>>> xxl-job register jobhandler success, name:demoJobHandler, jobHandler:com.xxl.job.core.handler.impl.MethodJobHandler@1618c98a[class com.jxj.task.WarningTask#demoJobHandler]
Exception in thread "Thread-20" java.net.BindException: Address already in use: bind
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

// Original bean definition:
@Bean
public XxlJobSpringExecutor xxlJobExecutor() {
    ...
}

// Bean definition after adding the lifecycle attributes:
@Bean(initMethod = "start", destroyMethod = "destroy")
public XxlJobSpringExecutor xxlJobExecutor() {
    ...
}
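For reference, here is a minimal sketch of what a complete executor bean can look like. It is an assumption-laden example, not the author's exact code: it presumes xxl-job-core 2.2+ (where the setter is setAppname rather than the older setAppName) and property keys matching the YAML shown in section 2.2; adjust both to your version and configuration.

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        XxlJobSpringExecutor executor = new XxlJobSpringExecutor();
        executor.setAdminAddresses(adminAddresses);
        executor.setAccessToken(accessToken);
        executor.setAppname(appname);
        executor.setPort(port); // the port that must be unique per running executor
        executor.setLogPath(logPath);
        return executor;
    }
}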

2.2 Case 2

Later I discovered what my problem actually was:

the local program and the online (deployed) program were registered to the same xxl-job admin, and both were configured with the same executor port. That is what caused the problem!

xxl:
  job:
    accessToken: xxx
    admin:
#       addresses: http://1xxxxx/xxl-job-admin
      addresses: http://39xxxxx/xxl-job-admin
    executor:
      address: 'xxx'
      appname: safexxxxxst # this name must match the one configured in the admin UI
      ip: ''
      logpath: /datxxxxjob/joxxxler
      logretentiondays: 30
      port: 9996

It was this duplicated executor port that caused the problem; changing it to a port nothing else is using fixes it:

port: 9996
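If you want the application to fail fast with a clearer message than a stack trace from a background thread, a quick pre-check of the configured port can help. This is a hypothetical sketch using only the JDK; xxl-job provides no such check, and PortCheck and its values are purely illustrative:

import java.io.IOException;
import java.net.ServerSocket;

public final class PortCheck {

    /** Returns true if the given TCP port can currently be bound on this machine. */
    public static boolean isFree(int port) {
        // Try to bind and immediately release the port; failure means it is taken.
        try (ServerSocket ignored = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        int executorPort = 9996; // the xxl-job executor port from application.yml
        if (!isFree(executorPort)) {
            throw new IllegalStateException(
                "Port " + executorPort + " is already in use; pick another executor port.");
        }
    }
}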

2.3 Case 3

Case 2 had seemingly settled the matter, but the same error came back that afternoon, hence this third case.

Key error lines from the startup log:

2023-02-14 13:59:12.074  INFO 8364 --- [           main] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#131ba005:0/SimpleConnection@5981f2c6 [delegate=amqp://[email protected]:5673/, localPort= 5415]
Exception in thread "Thread-21" java.net.BindException: Address already in use: bind
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:551)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1346)
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:503)
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:488)
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:985)
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:247)
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:344)
	at 

Since I had already fixed this once, I had become sensitive to ports (the analysis below shows why), so I searched the yaml file for every occurrence of port. There were five in total.
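On Windows the same search can be done from the command line; the file path below is a placeholder for your own config file:

findstr /n "port" src\main\resources\application.yml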

I could rule out the application's own server port (a conflict there makes the application fail outright and stop running), and rule out xxl-job (the Case 2 conflict was already resolved). That left the integrated middleware: redis, elasticsearch and rabbitmq. Redis and elasticsearch were not enabled at all, so the problem had to be rabbitmq.

I was running rabbitmq locally in Docker for connection testing.

Then I read the log again carefully, in particular the first line pasted above:

[delegate=amqp://[email protected]:5673/, localPort= 5415]

The error was thrown right after this line was printed, and it mentions localPort= 5415, so I checked how port 5415 was being used on my machine:

netstat -aon|findstr 5415

Sure enough, two different processes were using it!

C:\Uxxxs\1xx0>netstat -aon|findstr 5415
  TCP    127.0.0.1:5415         127.0.0.1:5673         ESTABLISHED     8364
  TCP    127.0.0.1:5673         127.0.0.1:5415         ESTABLISHED     4136

Then I opened Task Manager → Details and found that PID 4136 was Docker.
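The same lookup works from the command line too; tasklist is a standard Windows tool, and 4136 is the PID from the netstat output above:

tasklist /FI "PID eq 4136"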


I restarted the computer, and that solved it.

3 Root Cause

After the program starts, it spins up a separate thread that tries to bind the xxl-job executor port. The port was already occupied, so the bind failed and the exception above was thrown; note that it is the port, not the thread, that is "already in use", even though the exception surfaces on a named thread.

4 Further Study

When a service tries to create a listener, the bind appears to fail whenever the port is in a state such as LISTENING, ESTABLISHED or TIME_WAIT. The underlying mechanism is worth digging into.
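The error is easy to reproduce with nothing but the JDK: bind the same port twice in one process, and the second bind throws exactly the exception from the logs. A minimal sketch (9996 is just the executor port from earlier):

import java.io.IOException;
import java.net.ServerSocket;

public class BindTwice {
    public static void main(String[] args) throws IOException {
        // First bind succeeds and holds the port.
        ServerSocket first = new ServerSocket(9996);
        System.out.println("bound " + first.getLocalPort());
        // Second bind of the same port fails with
        // java.net.BindException: Address already in use
        ServerSocket second = new ServerSocket(9996);
    }
}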

Key Points of TCP State Transitions

The TCP protocol specifies that tearing down an established connection takes a four-way handshake (the FIN/ACK exchange); if any step of it is missing, the connection is left half-dead and the resources it holds are never released. A network server has to manage a large number of connections at once, so it must make sure useless connections are torn down completely, otherwise masses of dead connections will waste a great deal of server resources. Among the many TCP states, two deserve the most attention: CLOSE_WAIT and TIME_WAIT.

1、LISTENING狀态

  FTP服務啟動後首先處于偵聽(LISTENING)狀态。

2、ESTABLISHED狀态

  ESTABLISHED的意思是建立連接配接。表示兩台機器正在通信。

3、TIME_WAIT

我方主動調用close()斷開連接配接,收到對方确認後狀态變為TIME_WAIT。TCP協定規定TIME_WAIT狀态會一直持續2MSL(即兩倍的分段最大生存期),以此來確定舊的連接配接狀态不會對新連接配接産生影響。處于TIME_WAIT狀态的連接配接占用的資源不會被核心釋放,是以作為伺服器,在可能的情況下,盡量不要主動斷開連接配接,以減少TIME_WAIT狀态造成的資源浪費。
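A common, safer mitigation on the listening side is SO_REUSEADDR, which lets a restarted server rebind a port whose previous incarnation is still in TIME_WAIT. A minimal sketch; note that it does not help when another live process holds the port, which was the situation in this article:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddr {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket();
        // Must be set before bind(): allow binding even if the previous
        // owner of the port is still sitting in TIME_WAIT.
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(9996));
        System.out.println("listening on " + server.getLocalPort());
    }
}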

There is one way to avoid the resource waste of TIME_WAIT: tuning the socket's LINGER option. With SO_LINGER enabled and a zero timeout, close() aborts the connection with an RST instead of the normal FIN sequence, so the socket skips TIME_WAIT entirely. However, this is discouraged by the TCP protocol, and in some situations the shortcut can cause errors.
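For completeness, this is what that option looks like in Java; treat it as a demonstration rather than a recommendation, for the reasons above (5673 is just the local RabbitMQ port from this article):

import java.io.IOException;
import java.net.Socket;

public class HardClose {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("localhost", 5673);
        // SO_LINGER on with timeout 0: close() sends an RST immediately,
        // discards unsent data, and skips TIME_WAIT entirely.
        socket.setSoLinger(true, 0);
        socket.close();
    }
}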