shardingsphere-4.1.1: Analyzing the Metadata Loading Flow Behind Slow Project Startup

Author: 言沫東

I. Background

To relieve the performance bottleneck of accessing massive data volumes and improve the system's capacity for high concurrency, our project adopted the distributed database middleware ShardingSphere. Then one day, strange problems started to appear: the project started slowly, sometimes failed to start, and deployments even began to fail.

"We didn't change a single line of code. All we did was apply the sharded-table schema to the dev database. Why won't the project start?..."


II. Investigation

  1. Troubleshooting approach
  • Analyze the project startup log

A careful read of the startup log shows that line 70 of SchemaMetaDataLoader spent 689 seconds on "Loading 1800 tables' meta data"...

1800 is exactly the number of tables in our database...

2023-06-26 09:15:35,372 INFO (ShardingMetaDataLoader.java:131)- Loading 1 logic tables' meta data.

2023-06-26 09:15:35,665 INFO (SchemaMetaDataLoader.java:70)- Loading 1800 tables' meta data.

2023-06-26 09:15:04,473 INFO (MultipleDataSourcesRuntimeContext.java:59)- Meta data load finished, cost 689209 milliseconds.

  • If a key log line was missed, set breakpoints in the bean creation flow
  • The Arthas toolkit (dashboard, thread, ...)
  • Inspect threads in IDEA's debugger
  • Other profiling tools...

What exactly is SchemaMetaDataLoader doing, and why does it load every table in the database? Let's walk through what ShardingSphere does at startup.

  2. Analysis
  • Look at ShardingSphere's auto-configuration class, SpringBootConfiguration (only part of the code is shown)
@Configuration
@ComponentScan("org.apache.shardingsphere.spring.boot.converter")
@EnableConfigurationProperties({
        SpringBootShardingRuleConfigurationProperties.class,
        SpringBootMasterSlaveRuleConfigurationProperties.class, SpringBootEncryptRuleConfigurationProperties.class,
        SpringBootPropertiesConfigurationProperties.class, SpringBootShadowRuleConfigurationProperties.class})
@ConditionalOnProperty(prefix = "spring.shardingsphere", name = "enabled", havingValue = "true", matchIfMissing = true)
@AutoConfigureBefore(DataSourceAutoConfiguration.class)
@RequiredArgsConstructor
// Implements EnvironmentAware (a Spring extension point); Spring calls setEnvironment when initializing this class
public class SpringBootConfiguration implements EnvironmentAware {
    
    private final SpringBootShardingRuleConfigurationProperties shardingRule;
    
    private final SpringBootMasterSlaveRuleConfigurationProperties masterSlaveRule;
    
    private final SpringBootEncryptRuleConfigurationProperties encryptRule;
    
    private final SpringBootShadowRuleConfigurationProperties shadowRule;
    
    private final SpringBootPropertiesConfigurationProperties props;
    
    private final Map<String, DataSource> dataSourceMap = new LinkedHashMap<>();
    
    private final String jndiName = "jndi-name";
    
    // Our project does not use read/write splitting, so this condition matches; next, how shardingDataSource is created
    @Bean
    @Conditional(ShardingRuleCondition.class)
    public DataSource shardingDataSource() throws SQLException {
        return ShardingDataSourceFactory.createDataSource(dataSourceMap, new ShardingRuleConfigurationYamlSwapper().swap(shardingRule), props.getProps());
    }
    
    @Bean
    @Conditional(MasterSlaveRuleCondition.class)
    public DataSource masterSlaveDataSource() throws SQLException {
        return MasterSlaveDataSourceFactory.createDataSource(dataSourceMap, new MasterSlaveRuleConfigurationYamlSwapper().swap(masterSlaveRule), props.getProps());
    }
    
    @Bean
    @Conditional(EncryptRuleCondition.class)
    public DataSource encryptDataSource() throws SQLException {
        return EncryptDataSourceFactory.createDataSource(dataSourceMap.values().iterator().next(), new EncryptRuleConfigurationYamlSwapper().swap(encryptRule), props.getProps());
    }
    
    @Bean
    @Conditional(ShadowRuleCondition.class)
    public DataSource shadowDataSource() throws SQLException {
        return ShadowDataSourceFactory.createDataSource(dataSourceMap, new ShadowRuleConfigurationYamlSwapper().swap(shadowRule), props.getProps());
    }
    
    @Bean
    public ShardingTransactionTypeScanner shardingTransactionTypeScanner() {
        return new ShardingTransactionTypeScanner();
    }
    
    @Override
    public final void setEnvironment(final Environment environment) {
        String prefix = "spring.shardingsphere.datasource.";
        for (String each : getDataSourceNames(environment, prefix)) {
            try {
                dataSourceMap.put(each, getDataSource(environment, prefix, each));
            } catch (final ReflectiveOperationException ex) {
                throw new ShardingSphereException("Can't find datasource type!", ex);
            } catch (final NamingException namingEx) {
                throw new ShardingSphereException("Can't find JNDI datasource!", namingEx);
            }
        }
    }
    
    private List<String> getDataSourceNames(final Environment environment, final String prefix) {
        StandardEnvironment standardEnv = (StandardEnvironment) environment;
        standardEnv.setIgnoreUnresolvableNestedPlaceholders(true);
        return null == standardEnv.getProperty(prefix + "name")
                ? new InlineExpressionParser(standardEnv.getProperty(prefix + "names")).splitAndEvaluate() : Collections.singletonList(standardEnv.getProperty(prefix + "name"));
    }
    
    @SuppressWarnings("unchecked")
    private DataSource getDataSource(final Environment environment, final String prefix, final String dataSourceName) throws ReflectiveOperationException, NamingException {
        Map<String, Object> dataSourceProps = PropertyUtil.handle(environment, prefix + dataSourceName.trim(), Map.class);
        Preconditions.checkState(!dataSourceProps.isEmpty(), "Wrong datasource properties!");
        if (dataSourceProps.containsKey(jndiName)) {
            return getJndiDataSource(dataSourceProps.get(jndiName).toString());
        }
        DataSource result = DataSourceUtil.getDataSource(dataSourceProps.get("type").toString(), dataSourceProps);
        DataSourcePropertiesSetterHolder.getDataSourcePropertiesSetterByType(dataSourceProps.get("type").toString()).ifPresent(
            dataSourcePropertiesSetter -> dataSourcePropertiesSetter.propertiesSet(environment, prefix, dataSourceName, result));
        return result;
    }
    
    private DataSource getJndiDataSource(final String jndiName) throws NamingException {
        JndiObjectFactoryBean bean = new JndiObjectFactoryBean();
        bean.setResourceRef(true);
        bean.setJndiName(jndiName);
        bean.setProxyInterface(DataSource.class);
        bean.afterPropertiesSet();
        return (DataSource) bean.getObject();
    }
}           
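The data source names come straight from the Spring environment. As a rough sketch (a hypothetical class in plain Java, not ShardingSphere code; the real InlineExpressionParser also evaluates Groovy inline expressions such as ds${0..1}, which this sketch omits), the name/names resolution in getDataSourceNames behaves like:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the name-resolution branch in getDataSourceNames:
// "spring.shardingsphere.datasource.name" (singular) wins when present;
// otherwise "...names" (plural) is split into a list.
public class DataSourceNamesSketch {

    static List<String> resolve(String singularName, String pluralNames) {
        return null == singularName
                ? Arrays.asList(pluralNames.split(","))    // "names": comma-separated list
                : Collections.singletonList(singularName); // "name": exactly one data source
    }

    public static void main(String[] args) {
        // Single data source (table sharding only, as in this project)
        System.out.println(resolve("ds0", null));     // [ds0]
        // Multiple data sources (database sharding)
        System.out.println(resolve(null, "ds0,ds1")); // [ds0, ds1]
    }
}
```

The data source count resolved here matters later: it decides whether a default data source exists, and therefore whether the whole-schema metadata scan runs.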
  • ShardingDataSourceFactory#createDataSource (it delegates to the ShardingDataSource constructor shown below)
public ShardingDataSource(final Map<String, DataSource> dataSourceMap,
                         final ShardingRule shardingRule,
                         final Properties props) throws SQLException {
        super(dataSourceMap);
        checkDataSourceType(dataSourceMap);
        runtimeContext = new ShardingRuntimeContext(dataSourceMap, shardingRule, props, getDatabaseType());
    }           
  • ShardingRuntimeContext (the ShardingSphere runtime context, analogous to Spring's ApplicationContext)
public ShardingRuntimeContext(final Map<String, DataSource> dataSourceMap,
                             final ShardingRule shardingRule,
                             final Properties props,
                             final DatabaseType databaseType) throws SQLException {
        super(dataSourceMap, shardingRule, props, databaseType);
        cachedDatabaseMetaData = createCachedDatabaseMetaData(dataSourceMap);
        shardingTransactionManagerEngine = new ShardingTransactionManagerEngine();
        shardingTransactionManagerEngine.init(databaseType, dataSourceMap);
    }
 
// The parent constructor loads the metadata
 protected MultipleDataSourcesRuntimeContext(final Map<String, DataSource> dataSourceMap, 
                                            final T rule, 
                                            final Properties props, 
                                            final DatabaseType databaseType) throws SQLException {
        super(rule, props, databaseType);
        metaData = createMetaData(dataSourceMap, databaseType);
    }
 
// Load the metadata (when it finishes, the timing log is printed: Meta data load finished, cost ...)
private ShardingSphereMetaData createMetaData(final Map<String, DataSource> dataSourceMap, 
                                             final DatabaseType databaseType) throws SQLException {
        long start = System.currentTimeMillis();
        // Data source metadata
        DataSourceMetas dataSourceMetas = new DataSourceMetas(databaseType, getDatabaseAccessConfigurationMap(dataSourceMap));
        // Table metadata
        SchemaMetaData schemaMetaData = loadSchemaMetaData(dataSourceMap);
        // DataSourceMetas and SchemaMetaData together make up ShardingSphereMetaData
        ShardingSphereMetaData result = new ShardingSphereMetaData(dataSourceMetas, schemaMetaData);
        // Once metadata loading completes, the timing log is written
        log.info("Meta data load finished, cost {} milliseconds.", System.currentTimeMillis() - start);
        return result;
    }           
  • loadSchemaMetaData(dataSourceMap)
protected SchemaMetaData loadSchemaMetaData(final Map<String, DataSource> dataSourceMap) throws SQLException {
        // Read the configured max.connections.size.per.query (default: 1)
        int maxConnectionsSizePerQuery = getProperties().<Integer>getValue(ConfigurationPropertyKey.MAX_CONNECTIONS_SIZE_PER_QUERY);
        boolean isCheckingMetaData = getProperties().<Boolean>getValue(ConfigurationPropertyKey.CHECK_TABLE_METADATA_ENABLED);
        // ShardingMetaDataLoader.load does the actual metadata loading
        SchemaMetaData result = new ShardingMetaDataLoader(dataSourceMap, getRule(), maxConnectionsSizePerQuery, isCheckingMetaData)
                .load(getDatabaseType());
        // Decorates the column and index metadata inside the table metadata; not covered in detail here
        result = SchemaMetaDataDecorator.decorate(result, getRule(), new ShardingTableMetaDataDecorator());
        if (!getRule().getEncryptRule().getEncryptTableNames().isEmpty()) {
            result = SchemaMetaDataDecorator.decorate(result, getRule().getEncryptRule(), new EncryptTableMetaDataDecorator());
        }
        return result;
    }

    public SchemaMetaData load(final DatabaseType databaseType) throws SQLException {
        // 1. Load the metadata according to the sharding rules
        SchemaMetaData result = loadShardingSchemaMetaData(databaseType);
        // 2. Load the metadata of the default schema [this is the expensive part: it loads every table in the database]
        result.merge(loadDefaultSchemaMetaData(databaseType));
        return result;
    }

    // 1. Load the metadata according to the sharding rules
    private SchemaMetaData loadShardingSchemaMetaData(final DatabaseType databaseType) throws SQLException {
        log.info("Loading {} logic tables' meta data.", shardingRule.getTableRules().size());
        Map<String, TableMetaData> tableMetaDataMap = new HashMap<>(shardingRule.getTableRules().size(), 1);
        // Iterate over the sharding rules and load their metadata
        for (TableRule each : shardingRule.getTableRules()) {
            tableMetaDataMap.put(each.getLogicTable(), load(each.getLogicTable(), databaseType));
        }
        return new SchemaMetaData(tableMetaDataMap);
    }

    // 2. Load the metadata of the default schema [this is the expensive part: it loads every table in the database]
    private SchemaMetaData loadDefaultSchemaMetaData(final DatabaseType databaseType) throws SQLException {
        // Find the default data source [note how this lookup works; it matters]
        Optional<String> actualDefaultDataSourceName = shardingRule.findActualDefaultDataSourceName();
        // If a default data source exists, load it; otherwise return an empty SchemaMetaData
        return actualDefaultDataSourceName.isPresent()
                // The loading flow is analyzed in detail below [important]
                ? SchemaMetaDataLoader.load(dataSourceMap.get(actualDefaultDataSourceName.get()), maxConnectionsSizePerQuery, databaseType.getName())
                : new SchemaMetaData(Collections.emptyMap());
    }           
  • shardingRule.findActualDefaultDataSourceName();
public Optional<String> findActualDefaultDataSourceName() {
        // Get the default data source name
        String defaultDataSourceName = shardingDataSourceNames.getDefaultDataSourceName();
        if (Strings.isNullOrEmpty(defaultDataSourceName)) {
            return Optional.empty();
        }
        Optional<String> masterDefaultDataSourceName = findMasterDataSourceName(defaultDataSourceName);
        return masterDefaultDataSourceName.isPresent() ? masterDefaultDataSourceName : Optional.of(defaultDataSourceName);
    }

    // If exactly one data source is configured, return it; otherwise return the configured
    // defaultDataSourceName [empty if, as in our project, none is configured]
    // Our project configures only one data source [no database sharding, only table sharding]
    public String getDefaultDataSourceName() {
        return 1 == dataSourceNames.size() ? dataSourceNames.iterator().next() : shardingRuleConfig.getDefaultDataSourceName();
    }           
  • SchemaMetaDataLoader#load (important)
public static SchemaMetaData load(final DataSource dataSource, final int maxConnectionCount, final String databaseType) throws SQLException {
        List<String> tableNames;
        try (Connection connection = dataSource.getConnection()) {
            // First, fetch the names of ALL tables in the database
            tableNames = loadAllTableNames(connection, databaseType);
        }
        log.info("Loading {} tables' meta data.", tableNames.size());
        if (0 == tableNames.size()) {
            return new SchemaMetaData(Collections.emptyMap());
        }
        // maxConnectionCount is the max.connections.size.per.query mentioned above (default: 1)
        // It drives the partitioning; with the default of 1, tableGroups.size() == 1
        // Raising it switches to the asynchronous loading path below [keep it below the connection pool's maximum]
        List<List<String>> tableGroups = Lists.partition(tableNames, Math.max(tableNames.size() / maxConnectionCount, 1));
        Map<String, TableMetaData> tableMetaDataMap = 
                1 == tableGroups.size()
                // One group: load synchronously
                ? load(dataSource.getConnection(), tableGroups.get(0), databaseType)
                // More than one group: load asynchronously
                : asyncLoad(dataSource, maxConnectionCount, tableNames, tableGroups, databaseType);
        return new SchemaMetaData(tableMetaDataMap);
    }

// Synchronous loading
private static Map<String, TableMetaData> load(final Connection connection, final Collection<String> tables, final String databaseType) throws SQLException {
        try (Connection con = connection) {
            Map<String, TableMetaData> result = new LinkedHashMap<>();
            for (String each : tables) {
                // Load the column metadata and the index metadata
                result.put(each, new TableMetaData(ColumnMetaDataLoader.load(con, each, databaseType), IndexMetaDataLoader.load(con, each, databaseType)));
            }
            return result;
        }
    }

// Asynchronous loading
private static Map<String, TableMetaData> asyncLoad(final DataSource dataSource, final int maxConnectionCount, final List<String> tableNames,
                                                        final List<List<String>> tableGroups, final String databaseType) throws SQLException {
        Map<String, TableMetaData> result = new ConcurrentHashMap<>(tableNames.size(), 1);
        // Create a thread pool
        ExecutorService executorService = Executors.newFixedThreadPool(Math.min(tableGroups.size(), maxConnectionCount));
        Collection<Future<Map<String, TableMetaData>>> futures = new LinkedList<>();
        for (List<String> each : tableGroups) {
            futures.add(executorService.submit(() -> load(dataSource.getConnection(), each, databaseType)));
        }
        for (Future<Map<String, TableMetaData>> each : futures) {
            try {
                // Block on each future, turning the async loads back into a synchronous result
                result.putAll(each.get());
            } catch (final InterruptedException | ExecutionException ex) {
                if (ex.getCause() instanceof SQLException) {
                    throw (SQLException) ex.getCause();
                }
                Thread.currentThread().interrupt();
            }
        }
        return result;
    }           
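The effect of maxConnectionCount on the grouping can be checked with a little arithmetic (plain Java standing in for Guava's Lists.partition, which splits a list into consecutive sublists of the given size):

```java
// How max.connections.size.per.query drives the group count,
// and therefore the choice between sync and async loading.
public class PartitionMath {

    static int groupCount(int tableCount, int maxConnectionCount) {
        // Mirrors Lists.partition(tableNames, Math.max(size / maxConnectionCount, 1))
        int partitionSize = Math.max(tableCount / maxConnectionCount, 1);
        return (int) Math.ceil(tableCount / (double) partitionSize);
    }

    public static void main(String[] args) {
        // Default max.connections.size.per.query = 1: a single group,
        // so all 1800 tables are loaded serially on one connection.
        System.out.println(groupCount(1800, 1)); // 1
        // Raising it to 8 yields 8 groups, loaded concurrently by 8 threads.
        System.out.println(groupCount(1800, 8)); // 8
    }
}
```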
  • ColumnMetaDataLoader.load
public static Collection<ColumnMetaData> load(final Connection connection, final String table, final String databaseType) throws SQLException {
        if (!isTableExist(connection, connection.getCatalog(), table, databaseType)) {
            return Collections.emptyList();
        }
        Collection<ColumnMetaData> result = new LinkedList<>();
        Collection<String> primaryKeys = loadPrimaryKeys(connection, table, databaseType);
        List<String> columnNames = new ArrayList<>();
        List<Integer> columnTypes = new ArrayList<>();
        List<String> columnTypeNames = new ArrayList<>();
        List<Boolean> isPrimaryKeys = new ArrayList<>();
        List<Boolean> isCaseSensitives = new ArrayList<>();
        try (ResultSet resultSet = connection.getMetaData().getColumns(connection.getCatalog(), JdbcUtil.getSchema(connection, databaseType), table, "%")) {
            while (resultSet.next()) {
                String columnName = resultSet.getString(COLUMN_NAME);
                columnTypes.add(resultSet.getInt(DATA_TYPE));
                columnTypeNames.add(resultSet.getString(TYPE_NAME));
                isPrimaryKeys.add(primaryKeys.contains(columnName));
                columnNames.add(columnName);
            }
        }
        try (ResultSet resultSet = connection.createStatement().executeQuery(generateEmptyResultSQL(table, databaseType))) {
            for (String each : columnNames) {
                isCaseSensitives.add(resultSet.getMetaData().isCaseSensitive(resultSet.findColumn(each)));
            }
        }
        for (int i = 0; i < columnNames.size(); i++) {
            // TODO load auto generated from database meta data
            result.add(new ColumnMetaData(columnNames.get(i), columnTypes.get(i), columnTypeNames.get(i), isPrimaryKeys.get(i), false, isCaseSensitives.get(i)));
        }
        return result;
    }           
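A back-of-the-envelope check ties this code to the startup log: loading column metadata costs several JDBC round trips per table (the existence check, the primary keys, getColumns, and the empty-result query for case sensitivity), plus one more for the indexes, all serially on one connection with the default settings. Dividing the logged total by the table count:

```java
// Per-table cost implied by the startup log shown earlier.
public class MetaDataCostEstimate {
    public static void main(String[] args) {
        long totalMillis = 689_209;           // "cost 689209 milliseconds" from the log
        int tables = 1_800;                   // "Loading 1800 tables' meta data"
        long perTable = totalMillis / tables; // average per table, loaded serially
        System.out.println(perTable);         // 382
    }
}
```

Roughly 380 ms per table is plausible for four or five metadata queries each, which is why either cutting the table set or parallelizing the load helps so much.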
  • IndexMetaDataLoader.load
public static Collection<IndexMetaData> load(final Connection connection, final String table, final String databaseType) throws SQLException {
        Collection<IndexMetaData> result = new HashSet<>();
        try (ResultSet resultSet = connection.getMetaData().getIndexInfo(connection.getCatalog(), JdbcUtil.getSchema(connection, databaseType), table, false, false)) {
            while (resultSet.next()) {
                String indexName = resultSet.getString(INDEX_NAME);
                if (null != indexName) {
                    result.add(new IndexMetaData(indexName));
                }
            }
        }
        return result;
    }           
  • ColumnMetaData and IndexMetaData at a glance
// Column metadata
public class ColumnMetaData {
    // Column name
    private final String name;
    // JDBC data type
    private final int dataType;
    // Data type name
    private final String dataTypeName;
    // Whether the column is part of the primary key
    private final boolean primaryKey;
    // Whether the column is auto-generated
    private final boolean generated;
    // Whether the column is case-sensitive
    private final boolean caseSensitive;
}
 
// Index metadata
public final class IndexMetaData {
    // Index name
    private final String name;
}           

III. How to Fix the Slow Metadata Loading

  1. Increase max.connections.size.per.query, taking care not to exceed the connection pool's maximum size
  2. Configure a second data source whose connection info is identical to the first [configure data source 2, but never actually use it]. With more than one data source and no explicit default, findActualDefaultDataSourceName() returns empty and the full-schema load is skipped
  3. Adopt database sharding in addition to table sharding
  4. Upgrade to 5.x [5.x optimizes metadata loading: it loads with multiple threads, and loads only one of each set of identical shard tables]
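A sketch of options 1 and 2 in Spring Boot properties form (the data source names and the chosen value 8 are illustrative, not taken from the project's real configuration):

```properties
# Option 1: let metadata loading use more connections per query
# (keep it below the connection pool's maximum size)
spring.shardingsphere.props.max.connections.size.per.query=8

# Option 2: register a second data source with identical connection info.
# With more than one data source and no default data source configured,
# findActualDefaultDataSourceName() returns empty and the full-schema
# metadata scan is skipped entirely.
spring.shardingsphere.datasource.names=ds0,ds1
```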
