
ARTS Week 39

What's past is prologue.

Algorithm

Construct Binary Tree from String

Overview

You need to construct a binary tree from a string consisting of parentheses and integers.

The whole input represents a binary tree. It contains an integer followed by zero, one, or two pairs of parentheses. The integer represents the root's value, and a pair of parentheses contains a child binary tree with the same structure.

If a left child exists, it is always constructed first.

Example:

Input: "4(2(3)(1))(6(5))"
Output: the root node of the following binary tree:

       4
     /   \
    2     6
   / \   /
  3   1 5
           

Notes:

The input string contains only '(', ')', '-' and the digits '0' through '9'.

An empty tree is represented by "" rather than "()".

Analysis

Binary-tree problems like this are generally solved by recursive construction in pre-order/in-order/post-order fashion.

  1. Check the termination conditions first:
    1. If the string is empty, there is no node, so return None.
    2. If the string contains no '(', the whole string is a single integer and becomes the root node by itself.
  2. Split the string into left and right subtrees:
    1. Scan from the first '(' and keep a running balance of '(' and ')'.
    2. Each time the balance returns to 0, a complete "(...)" group has closed; if that group starts at the first '(', it is the left subtree, otherwise it is the right subtree.
  3. Build both subtrees recursively.

code

# Definition for a binary tree node.
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right


class Solution:
    def str2tree(self, s: str) -> TreeNode:
        root = self.dfs(s)
        return root

    def dfs(self, s):
        if len(s) == 0:
            return None
        if '(' not in s:
            return TreeNode(int(s[:]))
        pos = s.index('(')
        root = TreeNode(int(s[0: pos]))
        start = pos  # start: beginning index of the current subtree segment, i.e. the first '('
        cnt = 0
        for i in range(pos, len(s)):
            if s[i] == '(':
                cnt += 1
            elif s[i] == ')':
                cnt -= 1

            if cnt == 0 and start == pos:  # the first balanced group is the left subtree
                root.left = self.str2tree(s[start + 1: i])
                start = i + 1
            elif cnt == 0 and start != pos:  # the second balanced group is the right subtree; note this must stay an elif
                root.right = self.str2tree(s[start + 1: i])

        return root
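
A quick sanity check of the solution above (the test call below is mine, not part of the original post): build the tree from the example input and read the values back in pre-order.

def preorder(node):
    # Collect values in pre-order: root, then left subtree, then right subtree.
    if node is None:
        return []
    return [node.val] + preorder(node.left) + preorder(node.right)


root = Solution().str2tree("4(2(3)(1))(6(5))")
print(preorder(root))  # expected output: [4, 2, 3, 1, 6, 5]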

           

Review

[Tutorial, Part 1] How to develop Go gRPC microservice with HTTP/REST endpoint, middleware, Kubernetes deployment, etc.

Overview

The author provides a step-by-step tutorial on developing a simple CRUD "To Do list" microservice with gRPC and HTTP/REST endpoints, demonstrates how to write tests and add middleware (request-ID and logging/tracing) to the microservice, and ends with an example of building and deploying it to Kubernetes.

Table of Contents

  1. How to create a gRPC CRUD service and client
  2. How to add an HTTP/REST endpoint to the gRPC service
  3. How to add middleware (e.g. logging/tracing) to both the gRPC service and the HTTP/REST endpoint
  4. How to add a Kubernetes deployment configuration with health checks, and how to build and deploy the project to Google Cloud

create gRPC CRUD service

  1. https://github.com/amsokol/go-grpc-http-rest-microservice-tutorial/tree/part1
  2. https://github.com/golang-standards/project-layout

Tip

Recommended reading: classic resources on distributed system architecture

Overview

A roundup of resources on distributed architecture.

Basic concepts

  • CAP theorem
  • Fallacies of Distributed Computing

Classic resources

  • Distributed systems theory for the distributed systems engineer
  • FLP Impossibility Result
  • An introduction to distributed systems
  • Distributed Systems for fun and profit
  • Distributed Systems: Principles and Paradigms
  • Scalable Web Architecture and Distributed Systems
  • Principles of Distributed Systems
  • Making reliable distributed systems in the presence of software errors
  • Designing Data Intensive Applications

CAP theorem

The CAP theorem is the most fundamental and most critical result in distributed system design. It states that a distributed data store cannot simultaneously satisfy all three of the following guarantees (a toy sketch of the trade-off follows the list below).

  • Consistency: every read receives the most recent write or an error.
  • Availability: every request receives a (non-error) response, without the guarantee that it contains the most recent write.
  • Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.
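
To make the trade-off concrete, here is a toy sketch (my own illustration, not from the referenced material): during a simulated network partition, a CP-style read refuses to answer rather than risk returning stale data, while an AP-style read answers from its local, possibly stale copy.

class Replica:
    # A toy replica holding a local copy of the data.
    def __init__(self):
        self.data = {}


def replicate(source, target, partitioned):
    # Propagate writes from source to target unless the network is partitioned.
    if not partitioned:
        target.data.update(source.data)


def cp_read(replica, key, partitioned):
    # CP choice: sacrifice availability and return an error during a partition.
    if partitioned:
        raise RuntimeError("partition: refusing to serve a possibly stale read")
    return replica.data.get(key)


def ap_read(replica, key):
    # AP choice: sacrifice consistency and always answer from the local copy.
    return replica.data.get(key)


a, b = Replica(), Replica()
a.data["x"] = 1
replicate(a, b, partitioned=False)   # b sees x = 1

a.data["x"] = 2                      # a new write arrives while the network is partitioned
replicate(a, b, partitioned=True)    # replication is blocked

print(ap_read(b, "x"))               # prints 1: available, but stale
# cp_read(b, "x", partitioned=True) would raise instead of returning stale data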

False assumptions

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn't change.
  6. There is only one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

Resource summaries

Distributed systems theory for the distributed systems engineer

Distributed Systems for Fun and Profit

A small book covering the key problems in distributed systems, including the role of time and different replication strategies. It is introduced in more detail later in the original article.

Notes on distributed systems for young bloods

No theory in this one; it is a set of practical notes on distributed systems, well suited to newcomers.

A Note on Distributed Systems

A classic paper explaining why, in a distributed system, remote interactions cannot be treated like calls on local objects.

The fallacies of distributed computing

The eight false assumptions every newcomer to distributed systems makes, and a discussion of their consequences. This article is the basis of the false assumptions section above.

Share

A simple ORM template (Python)

Overview

A partial collection of synchronous/asynchronous components built on an ORM framework (SQLAlchemy).

PostgreSQL

import os
import asyncio
import inspect
import functools
import collections

import orjson
from blinker import signal
from sqlalchemy import exc, event, create_engine
from sqlalchemy.engine.base import Engine as SQLAlchemyEngine
from sqlalchemy.orm import UOWTransaction, sessionmaker, scoped_session
from sqlalchemy.pool import NullPool
from sqlalchemy.ext.asyncio import AsyncSession as _AsyncSession
from sqlalchemy.ext.asyncio import create_async_engine, async_scoped_session

# project-specific settings (connection DSN and pool parameters)
from settings import DB_DSN, DB_POOL_SIZE, DB_MAX_OVERFLOW, DB_POOL_RECYCLE

engine: SQLAlchemyEngine = create_engine(
    DB_DSN,
    pool_size=DB_POOL_SIZE,
    max_overflow=DB_MAX_OVERFLOW,
    pool_recycle=DB_POOL_RECYCLE,
    json_serializer=lambda obj: orjson.dumps(obj).decode(),
    json_deserializer=lambda obj: orjson.loads(obj),
)

# Constructs a scoped DBSession.
Session = scoped_session(
    sessionmaker(engine, expire_on_commit=False, future=True),
)


# Standard SQLAlchemy recipe for forked worker processes: remember which process
# created each DBAPI connection so a child process never reuses the parent's connection.
@event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    connection_record.info["pid"] = os.getpid()


@event.listens_for(engine, "checkout")
def checkout(dbapi_connection, connection_record, connection_proxy):
    pid = os.getpid()
    if connection_record.info["pid"] != pid:
        connection_record.connection = connection_proxy.connection = None
        raise exc.DisconnectionError(
            "Connection record belongs to pid %s, "
            "attempting to check out in pid %s" % (connection_record.info["pid"], pid)
        )


# Record the primary keys of every instance changed in a flush; the keys are
# broadcast as blinker signals once the transaction commits (see after_commit below).
@event.listens_for(Session, "after_flush")
def after_flush(session, flush_context: UOWTransaction):
    if not hasattr(session, "g_changed_track_info"):
        session.g_changed_track_info = collections.defaultdict(set)

    for instances in flush_context.mappers.values():
        for instance in instances:
            session.g_changed_track_info[instance.class_.__name__].add(
                instance.identity
            )


@event.listens_for(Session, "after_commit")
def after_commit(session):
    if not hasattr(session, "g_changed_track_info"):
        return

    for model_name, primary_keys in session.g_changed_track_info.items():
        signal(model_name).send(primary_keys)
    delattr(session, "g_changed_track_info")


# Async engine: the same DSN with the asyncpg driver; NullPool opens a fresh
# connection per checkout instead of keeping a pool.
async_engine = create_async_engine(
    DB_DSN.replace("psycopg2", "asyncpg"),
    json_serializer=lambda obj: orjson.dumps(obj).decode(),
    json_deserializer=lambda obj: orjson.loads(obj),
    poolclass=NullPool,
)

_session_factory = sessionmaker(
    async_engine, expire_on_commit=False, class_=_AsyncSession
)


def _hooked_session_maker():
    """Build an AsyncSession whose underlying sync session gets the same change-tracking hooks as above."""
    async_session = _session_factory()

    @event.listens_for(async_session.sync_session, "after_flush")
    def _after_flush(session, flush_context: UOWTransaction):
        if not hasattr(session, "g_changed_track_info"):
            session.g_changed_track_info = collections.defaultdict(set)

        for instances in flush_context.mappers.values():
            for instance in instances:
                session.g_changed_track_info[instance.class_.__name__].add(
                    instance.identity
                )

    @event.listens_for(async_session.sync_session, "after_commit")
    def _after_commit(session):
        if not hasattr(session, "g_changed_track_info"):
            return

        for model_name, primary_keys in session.g_changed_track_info.items():
            signal(model_name).send(primary_keys)
        delattr(session, "g_changed_track_info")

    return async_session


AsyncSession = async_scoped_session(
    _hooked_session_maker,
    scopefunc=asyncio.current_task,
)


def session_scope(fn):
    """Defines session top level life time, session will be closed after scope ends."""

    if inspect.iscoroutinefunction(fn):

        @functools.wraps(fn)
        async def wrapper_decorator(*args, **kwargs):
            async with AsyncSession():
                return await fn(*args, **kwargs)

    else:

        @functools.wraps(fn)
        def wrapper_decorator(*args, **kwargs):
            with Session():
                return fn(*args, **kwargs)

    return wrapper_decorator


def commit_scope(fn):
    """Defines transaction top level life time, force begin /close transaction."""

    if inspect.iscoroutinefunction(fn):

        @functools.wraps(fn)
        async def wrapper_decorator(*args, **kwargs):
            session = AsyncSession()
            try:
                res = await fn(*args, **kwargs)
                await session.commit()
                return res

            except Exception:
                await session.rollback()
                raise

    else:

        @functools.wraps(fn)
        def wrapper_decorator(*args, **kwargs):
            session = Session()
            try:
                res = fn(*args, **kwargs)
                session.commit()
                return res
            except Exception:
                session.rollback()
                raise

    return wrapper_decorator
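
A brief usage sketch (mine, not part of the original template), assuming a hypothetical User model mapped elsewhere, showing how the two decorators are intended to wrap service-layer functions:

# Hypothetical usage; `User` is an assumed ORM model, not defined in this template.
@session_scope
@commit_scope
def rename_user(user_id: int, new_name: str):
    session = Session()
    user = session.get(User, user_id)   # Session.get() is available in SQLAlchemy 1.4+
    user.name = new_name                # committed (or rolled back) by commit_scope
    return user.id


@session_scope
@commit_scope
async def rename_user_async(user_id: int, new_name: str):
    session = AsyncSession()
    user = await session.get(User, user_id)
    user.name = new_name
    return user.id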

           