
ARTS Week 2

  • Algorithm
    • [Add Two Numbers](https://leetcode.com/problems/add-two-numbers/)
  • Review
    • [An introduction to Kafka](https://developer.ibm.com/articles/ba-kafka-intro/)
  • Tip
    • Which is more efficient: count(1) or count(*)?
  • Share
    • [Best Performance Practices for Hibernate 5 and Spring Boot 2 (Part 2)](https://dzone.com/articles/best-performance-practices-for-hibernate-5-and-spr)

Algorithm

Add Two Numbers

import com.tntcpu.leetcode.utils.ListNode;

public class AddTwoNumbers_2_Best {

    public static void main(String[] args) {
        // Digits are stored in reverse order, so l1 = 0 -> 1 represents 10
        // and l2 = 0 -> 1 -> 2 represents 210.
        ListNode l1 = new ListNode(0);
        l1.next = new ListNode(1);
        ListNode l2 = new ListNode(0);
        l2.next = new ListNode(1);
        l2.next.next = new ListNode(2);

        // 10 + 210 = 220, printed digit by digit (reversed) as 0, 2, 2.
        ListNode l3 = addTwoNumbers(l1, l2);
        System.out.println(l3.val);
        System.out.println(l3.next.val);
        System.out.println(l3.next.next.val);
    }

    private static ListNode addTwoNumbers(ListNode l1, ListNode l2) {
        // A dummy head simplifies appending; the real result starts at dummyHead.next.
        ListNode dummyHead = new ListNode(0);
        ListNode p = l1, q = l2, curr = dummyHead;
        int carry = 0;
        // Walk both lists together, adding digit pairs plus the carry.
        while (p != null || q != null) {
            int x = (p != null) ? p.val : 0; // treat a missing node as digit 0
            int y = (q != null) ? q.val : 0;
            int sum = carry + x + y;
            carry = sum / 10;
            curr.next = new ListNode(sum % 10);
            curr = curr.next;
            if (p != null) p = p.next;
            if (q != null) q = q.next;
        }
        // A final carry (e.g. 5 + 5 = 10) needs one extra node.
        if (carry > 0) {
            curr.next = new ListNode(carry);
        }
        return dummyHead.next;
    }
}


Review

An introduction to Kafka

Summary:

  1. A few basic components in Kafka are:

    1.1 Broker: A Kafka broker is where the data sent to Kafka is stored.

    1.2 Producer: A producer is an entity that sends data to the broker.

    1.3 Consumer: A consumer is an entity that requests data from the broker.

  2. Kafka stores data in topics. Producers send data to specific Kafka topics, and consumers likewise read data from specific topics.
  3. There are a few reasons for the continued popularity and adoption of Kafka in the industry:

    3.1 Scalability: Kafka scales horizontally; topics are split into partitions that can be distributed across the brokers in a cluster.

    3.2 Fault tolerance and reliability: Kafka is designed in a way that a broker failure is detectable by other brokers in the cluster.

    3.3 Throughput: Brokers can store and retrieve data efficiently and at very high speed.
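The broker/producer/consumer/topic model above can be sketched as a toy in-memory analogy in plain Java. This is not the real Kafka client API (`ToyBroker` and its methods are made up for illustration); it only shows the core idea that a broker keeps an append-only log per topic, producers append to it, and each consumer reads from its own offset:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy in-memory analogy of Kafka's topic model (not the real client API):
// the broker keeps an append-only log per topic; producers append records,
// and each consumer tracks its own read offset into that log.
public class ToyBroker {
    private final Map<String, List<String>> topics = new HashMap<>();

    // Producer side: append a record to the end of a topic's log.
    public void produce(String topic, String record) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(record);
    }

    // Consumer side: read all records at or after the given offset.
    public List<String> consume(String topic, int offset) {
        List<String> log = topics.getOrDefault(topic, List.of());
        return log.subList(Math.min(offset, log.size()), log.size());
    }

    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        broker.produce("orders", "order-1");
        broker.produce("orders", "order-2");
        // A consumer starting at offset 0 sees the whole log...
        System.out.println(broker.consume("orders", 0)); // [order-1, order-2]
        // ...while one that already read the first record resumes at offset 1.
        System.out.println(broker.consume("orders", 1)); // [order-2]
    }
}
```

Because consumers hold their own offsets rather than removing records, many independent consumers can read the same topic, which is one reason the real Kafka scales well.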

Tip

Share

Best Performance Practices for Hibernate 5 and Spring Boot 2 (Part 2)
