
Why Distributed Databases Are Needed

  There are many reasons a database may need to scale out:

  1. The volume of data that requests must access is too large (sheer data volume alone is not a reason; data that is never accessed can simply be archived).
  2. Server CPU, memory, network, or I/O has hit its bottleneck and response time has degraded badly.
  3. Weak MPP in centralized databases: they are usually designed to keep schema design and SQL as simple and smooth as possible for developers, e.g. not requiring that de-facto primary/foreign-key relationships between tables be declared, which weakens parallel execution; or parallel execution was not part of the original design and was only strengthened gradually, leaving it with inherent flaws. The same applies to partitioning.

  These three are usually the root causes.

Introduction to Citus

  A fairly impartial reference is the article "PG-XL,Citus,GreenPlum如何选择" (how to choose among PG-XL, Citus, and Greenplum) written by Citus China. Unlike other distributed databases such as TiDB, OceanBase, and TDSQL, each of which claims to be the strongest, Citus assesses itself fairly objectively: it is well suited to distributed OLTP, not to large-scale ad-hoc analytics. In practice, many systems accumulate large data volumes and have complex business flows yet modest TPS. Moving such systems onto a distributed database makes physical model design critical; an existing schema cannot simply be dropped on top. Whether to include a data hub or a history database, for example, matters a great deal; otherwise, once on the platform you cannot get off, and the maintenance cost becomes very high.

  Like other distributed architectures, Citus uses coordinator and worker nodes, i.e. master and worker. Calling this compute/storage separation is inaccurate (most distributed databases such as OceanBase, GoldenDB, and TDSQL that call themselves compute/storage separated are mislabeled too); the systems that come close to genuine compute/storage separation are Oracle Exadata and TiDB. Both the coordinator and the workers are plain PostgreSQL instances.

  After an SQL statement is parsed, it is intercepted and rewritten by the citus extension during the analyze phase on the coordinator (unlike Greenplum, XL, and XC, citus uses the extension mechanism: PG defines a large number of hooks for extensions to plug into; see "postgresql内核开发必备之extension机制"), which performs the fork-and-join of the statement. Thanks to the extension approach, you can assume citus is roughly on par with Greenplum, XL, and XC in core database behavior such as transactions and SQL syntax/semantics, rather than being something wedged into third-party middleware like pgpool or pgbouncer; it therefore offers better consistency and stability guarantees.

  For distributed transactions, citus likewise uses the 2PC protocol; see http://citusdb.cn/?p=661 for its implementation.

  Note: the virtue of the citus architecture is that it treats "distributed" as a feature rather than an inherent attribute. The author has held this position on every occasion: 95%+ of systems will never need a microservice architecture, nor a distributed database, because they will never reach that capacity. In theory, then, an application can be made to run on both a single instance and a distributed deployment, something that is very hard for databases designed as distributed from day one.

  Because citus has no separate GTM node, the coordinator cannot run active-active, so the coordinator easily becomes a single point of failure.

   For this, Citus provides two parameters, citus.use_secondary_nodes and citus.writable_standby_coordinator, to scale out writes and to split reads and writes across data nodes; a standby CN can then execute both queries and DML, as sketched below.
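A minimal sketch of those settings, assuming the GUC names and values documented for Citus (citus.use_secondary_nodes, citus.writable_standby_coordinator):

-- on a standby coordinator: allow DML to be routed to the workers
SET citus.writable_standby_coordinator TO on;
-- route reads to secondary (standby) worker nodes instead of primaries
SET citus.use_secondary_nodes TO 'always';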

   As you can see, a reliable distributed database architecture is very complex; without a tightly integrated monitoring and management platform, the maintenance burden is easy to imagine.

 Reference: https://blog.csdn.net/weixin_46199817/article/details/117223870

Installing Citus

  Source code and rpm packages can be downloaded from https://github.com/citusdata/citus; typically you can simply run yum install citus101_13-10.1.1.citus-1.el7.x86_64.

[zjh@lightdb1 usr]$ rpm -ql postgresql13-13.3-1PGDG.rhel7.x86_64
/usr/pgsql-13/bin/clusterdb
/usr/pgsql-13/bin/createdb
/usr/pgsql-13/bin/createuser
/usr/pgsql-13/bin/dropdb
/usr/pgsql-13/bin/dropuser
/usr/pgsql-13/bin/pg_basebackup
/usr/pgsql-13/bin/pg_config
/usr/pgsql-13/bin/pg_dump
/usr/pgsql-13/bin/pg_dumpall


[zjh@lightdb1 usr]$ rpm -qa | grep citus
citus_13-10.0.3-1.rhel7.x86_64
[zjh@lightdb1 usr]$ rpm -ql citus_13-10.0.3-1.rhel7.x86_64
/usr/pgsql-13/doc/extension/README-citus.md
/usr/pgsql-13/lib/citus.so
/usr/pgsql-13/share/extension/citus--10.0-1--10.0-2.sql
/usr/pgsql-13/share/extension/citus--10.0-2--10.0-3.sql
/usr/pgsql-13/share/extension/citus--8.0-1--8.0-2.sql
/usr/pgsql-13/share/extension/citus--8.0-1.sql
/usr/pgsql-13/share/extension/citus--8.0-10--8.0-11.sql
/usr/pgsql-13/share/extension/citus--8.0-11--8.0-12.sql
/usr/pgsql-13/share/extension/citus--8.0-12--8.0-13.sql
/usr/pgsql-13/share/extension/citus--8.0-13--8.1-1.sql
/usr/pgsql-13/share/extension/citus--8.0-2--8.0-3.sql
/usr/pgsql-13/share/extension/citus--8.0-3--8.0-4.sql
/usr/pgsql-13/share/extension/citus--8.0-4--8.0-5.sql
/usr/pgsql-13/share/extension/citus--8.0-5--8.0-6.sql
/usr/pgsql-13/share/extension/citus--8.0-6--8.0-7.sql
/usr/pgsql-13/share/extension/citus--8.0-7--8.0-8.sql
/usr/pgsql-13/share/extension/citus--8.0-8--8.0-9.sql
/usr/pgsql-13/share/extension/citus--8.0-9--8.0-10.sql
/usr/pgsql-13/share/extension/citus--8.1-1--8.2-1.sql
/usr/pgsql-13/share/extension/citus--8.2-1--8.2-2.sql
/usr/pgsql-13/share/extension/citus--8.2-2--8.2-3.sql
/usr/pgsql-13/share/extension/citus--8.2-3--8.2-4.sql
/usr/pgsql-13/share/extension/citus--8.2-4--8.3-1.sql
/usr/pgsql-13/share/extension/citus--8.3-1--9.0-1.sql
/usr/pgsql-13/share/extension/citus--9.0-1--9.0-2.sql
/usr/pgsql-13/share/extension/citus--9.0-2--9.1-1.sql
/usr/pgsql-13/share/extension/citus--9.1-1--9.2-1.sql
/usr/pgsql-13/share/extension/citus--9.2-1--9.2-2.sql
/usr/pgsql-13/share/extension/citus--9.2-2--9.2-4.sql
/usr/pgsql-13/share/extension/citus--9.2-4--9.3-2.sql
/usr/pgsql-13/share/extension/citus--9.3-1--9.2-4.sql
/usr/pgsql-13/share/extension/citus--9.3-2--9.4-1.sql
/usr/pgsql-13/share/extension/citus--9.4-1--9.5-1.sql
/usr/pgsql-13/share/extension/citus--9.5-1--10.0-1.sql
/usr/pgsql-13/share/extension/citus.control
/usr/share/doc/citus_13-10.0.3
/usr/share/doc/citus_13-10.0.3/CHANGELOG.md
/usr/share/licenses/citus_13-10.0.3
/usr/share/licenses/citus_13-10.0.3/LICENSE

  Then create the PostgreSQL instances normally with initdb: one CN and two DNs, as follows:

[zjh@lightdb1 pgsql-13]$ ll
total 24
drwxr-xr-x  2 zjh zjh 4096 Jun  1 17:43 bin
drwx------ 21 zjh zjh 4096 Aug 29 00:00 coordinator_1
drwxr-xr-x  3 zjh zjh   23 Jun  1 17:43 doc
drwxr-xr-x  3 zjh zjh 4096 Jun 19 14:58 lib
drwxr-xr-x  7 zjh zjh 4096 Jun  1 17:43 share
drwx------ 21 zjh zjh 4096 Aug 29 00:00 worker_1_13588
drwx------ 21 zjh zjh 4096 Aug 29 00:00 worker_2_23588

   Install the citus extension:

-- configure on both the CN and the DNs
shared_preload_libraries='citus'  -- citus must be the first entry in the list
CREATE EXTENSION citus;  -- installing it under the postgres user is sufficient
SELECT * from citus_add_node('10.0.0.1', 13588);
SELECT * from citus_add_node('10.0.0.1', 23588);

  Query the list of DNs:

postgres=# SELECT * FROM citus_get_active_worker_nodes();
 node_name | node_port 
-----------+-----------
 10.0.0.1  |     23588
 10.0.0.1  |     13588
(2 rows)

Concepts

  In citus, shards and nodes are not in a one-to-one relationship. This differs from Greenplum and is closer to the design of NoSQL stores such as Couchbase; to some degree it also removes the need to partition tables on top of citus (at once an advantage and a drawback; a trade-off).

Citus Table Types

  citus has three table types: 1. distributed (sharded) tables: n shards per DN, shard count configurable; typically order and customer tables. 2. reference (broadcast) tables: one full copy per DN, not on the CN; typically dictionary, product, rate, organization, and permission tables. 3. local tables: exist only on the CN; some system parameter and statistics tables (depending on the case these may instead be broadcast). Local tables are generally not joined with reference or distributed tables. A table created on the CN is local by default; a distributed table can also be converted back to a local table with SELECT undistribute_table('github_events'); (the data is migrated back first, which is also one way to scale in).
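A minimal sketch of moving between the types (dict_product is a hypothetical table; github_events is taken from the text above):

CREATE TABLE dict_product (id int PRIMARY KEY, name text);  -- a plain CN-local table by default
SELECT create_reference_table('dict_product');   -- now one full copy per worker
SELECT undistribute_table('github_events');      -- pull a distributed table back to the CN (data migrates back)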

  Joins between reference tables and distributed tables, and among reference tables, are common.

  Multiple lines of business often live in the same database: inventory and customers, operation logs and orders, staff with menus/functions/customers, queried both by menu and by staff member. citus therefore lets you group related tables, so that when it builds a distributed plan it knows which tables are related and which are not. For example:

  SELECT create_distributed_table('event', 'tenant_id');
  SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');

  The prerequisite for grouping is that the tables use the same column as their distribution column. Grouping lets SQL be optimized one step further.

  Sooner or later you will find yourself joining the inventory and customer tables through orders, with inventory sharded by product and customers sharded by customer id. How well does that work then?

  Unlike Greenplum, which supports the DISTRIBUTED BY syntax, citus is implemented as an extension and does not extend PG's own grammar, so it uses functions to declare a table distributed.

CREATE TABLE companies (
id bigserial PRIMARY KEY,
name text NOT NULL,
image_url text,
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL
);

SELECT create_distributed_table('companies', 'id'); -- companies becomes a distributed table, sharded on the id column

  Note again that in citus the shard count does not map one-to-one onto workers; this differs from GP but resembles what TiDB and OceanBase do today.

   To create a reference (broadcast) table, use the create_reference_table function:

SELECT create_reference_table('geo_ips');  -- broadcast to every worker node, not to the CN

  citus supports most DDL statements and takes care of running them across all workers.
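For example (an illustrative sketch against the companies table defined above; citus forwards each statement to every shard on every worker):

ALTER TABLE companies ADD COLUMN country text;       -- issued once on the CN
CREATE INDEX companies_name_idx ON companies (name); -- created on every shard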

The Key to Distributed Database Performance Optimization

  Co-location determines both the performance and the scalability of a distributed database.

If co-location really cannot be achieved, even via LightDB logical replication, avoid the following operations as much as possible (see the sketch after this list):
1. Aggregates with DISTINCT, i.e. agg(distinct value2);
2. Window functions: <window_function>() OVER (PARTITION BY <...> ORDER BY <...> <window_frame>);
3. Sorting plus pagination on arbitrary columns.
For these three cases even GP's interconnect is powerless and the CN is bound to become the bottleneck; streaming computation cannot solve the to-B problem either, only smarter algorithms and trading space for time can.
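Illustrative shapes of such queries, assuming hypothetical tables orders and events sharded on customer_id; each one forces the shards to ship intermediate results to the CN for merging:

-- DISTINCT aggregate keyed on a non-distribution column
SELECT product_id, count(DISTINCT customer_id) FROM orders GROUP BY product_id;
-- window function partitioned on a non-distribution column
SELECT *, row_number() OVER (PARTITION BY product_id ORDER BY created_at) FROM orders;
-- arbitrary-column sort plus deep pagination: every shard sorts, then the CN re-sorts
SELECT * FROM events ORDER BY amount DESC LIMIT 20 OFFSET 100000;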

Custom Distribution Algorithms, Replica Counts, and Shard Counts

 

Citus Function Types

  Whether users care to admit it or not, for the same functionality stored procedures and functions are simply more efficient than the application shipping SQL over the wire. citus therefore supports the notion of distributed functions.
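A sketch following the create_distributed_function API from the Citus docs; delivery(int, int) is a hypothetical function whose first argument is the distribution key of orders:

SELECT create_distributed_function('delivery(int, int)', '$1', colocate_with := 'orders');
-- calls are now routed to the worker owning the shard for $1, so the
-- function's SQL runs next to its data instead of round-tripping via the CN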

Adding Nodes

  A newly added node is not used by default; call rebalance_table_shards to have citus migrate data onto it, after which it starts serving traffic.

  SELECT rebalance_table_shards('companies');
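It is worth previewing the moves first and monitoring the migration; a sketch using the companion functions Citus documents alongside rebalance_table_shards (both are also listed in the management-function section below):

SELECT * FROM get_rebalance_table_shards_plan('companies');  -- planned shard moves, nothing changes yet
SELECT * FROM get_rebalance_progress();                      -- run from another session during the rebalance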

Execution Plan Analysis

explain(analyze,verbose,buffers) select
    count(*) as low_stock
from
    (
    select
        s_w_id,
        s_i_id,
        s_quantity
    from
        bmsql_stock
    where
        s_w_id =     975
        and s_quantity < 12
        and s_i_id in (
        select
            ol_i_id
        from
            bmsql_district
        join bmsql_order_line on
            ol_w_id = d_w_id
            and ol_d_id = d_id
            and ol_o_id >= d_next_o_id - 20
            and ol_o_id < d_next_o_id
        where
            d_w_id =     975
            and d_id =    9 ) ) as L

QUERY PLAN
----------
Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=0 width=0) (actual time=9.781..9.782 rows=1 loops=1)
  Output: remote_scan.low_stock
  Task Count: 1
  Tuple data received from nodes: 1 bytes
  Tasks Shown: All
  ->  Task
        Query: SELECT count(*) AS low_stock FROM (SELECT bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity FROM public.bmsql_stock_103384 bmsql_stock WHERE ((bmsql_stock.s_w_id OPERATOR(pg_catalog.=) 975) AND (bmsql_stock.s_quantity OPERATOR(
        Tuple data received from node: 1 bytes
        Node: host=127.0.0.1 port=13588 dbname=postgres
        ->  Aggregate  (cost=25597.32..25597.33 rows=1 width=8) (actual time=1.276..1.277 rows=1 loops=1)
              Output: count(*)
              Buffers: shared hit=810
              ->  Nested Loop  (cost=7612.59..25597.14 rows=73 width=0) (actual time=0.389..1.272 rows=4 loops=1)
                    Inner Unique: true
                    Buffers: shared hit=810
                    ->  HashAggregate  (cost=7612.16..7646.24 rows=3408 width=4) (actual time=0.163..0.206 rows=186 loops=1)
                          Output: bmsql_order_line.ol_i_id
                          Group Key: bmsql_order_line.ol_i_id
                          Batches: 1  Memory Usage: 129kB
                          Buffers: shared hit=42
                          ->  Nested Loop  (cost=0.71..7603.64 rows=3408 width=4) (actual time=0.055..0.131 rows=189 loops=1)
                                Output: bmsql_order_line.ol_i_id
                                Buffers: shared hit=42
                                ->  Index Scan using bmsql_district_pkey_103191 on public.bmsql_district_103191 bmsql_district  (cost=0.27..8.30 rows=1 width=12) (actual time=0.014..0.014 rows=1 loops=1)
                                      Output: bmsql_district.d_w_id, bmsql_district.d_id, bmsql_district.d_ytd, bmsql_district.d_tax, bmsql_district.d_next_o_id, bmsql_district.d_name, bmsql_district.d_street_1, bmsql_district.d_street_2, bmsql_district.d
                                      Index Cond: ((bmsql_district.d_w_id = 975) AND (bmsql_district.d_id = 9))
                                      Buffers: shared hit=3
                                ->  Index Scan using bmsql_order_line_pkey_103351 on public.bmsql_order_line_103351 bmsql_order_line  (cost=0.44..7561.26 rows=3408 width=16) (actual time=0.022..0.081 rows=189 loops=1)
                                      Output: bmsql_order_line.ol_w_id, bmsql_order_line.ol_d_id, bmsql_order_line.ol_o_id, bmsql_order_line.ol_number, bmsql_order_line.ol_i_id, bmsql_order_line.ol_delivery_d, bmsql_order_line.ol_amount, bmsql_order_line.
                                      Index Cond: ((bmsql_order_line.ol_w_id = 975) AND (bmsql_order_line.ol_d_id = 9) AND (bmsql_order_line.ol_o_id >= (bmsql_district.d_next_o_id - 20)) AND (bmsql_order_line.ol_o_id < bmsql_district.d_next_o_id))
                                      Buffers: shared hit=39
                    ->  Index Scan using bmsql_stock_pkey_103384 on public.bmsql_stock_103384 bmsql_stock  (cost=0.43..5.27 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=186)
                          Output: bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity, bmsql_stock.s_ytd, bmsql_stock.s_order_cnt, bmsql_stock.s_remote_cnt, bmsql_stock.s_data, bmsql_stock.s_dist_01, bmsql_stock.s_dist_02, bmsql_stock.s_dist_03
                          Index Cond: ((bmsql_stock.s_w_id = 975) AND (bmsql_stock.s_i_id = bmsql_order_line.ol_i_id))
                          Filter: (bmsql_stock.s_quantity < 12)
                          Rows Removed by Filter: 1
                          Buffers: shared hit=768
            Planning Time: 0.755 ms
            Execution Time: 1.498 ms
Planning:
  Buffers: shared hit=3
Planning Time: 0.324 ms
Execution Time: 9.796 ms

  For ordinary SQL, the distortion is not too severe.

Performance Analysis

  

High Availability

  Streaming replication is sufficient.

The CN as a Bottleneck

   Deploy several CNs behind an LVS cluster; CNs cannot, however, be scaled dynamically.

Bypass-CN (direct-to-DN) Mode

   This requires the hash algorithm to be exposed externally.
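A sketch of how a client could locate data itself with the functions and catalogs Citus ships (the shard id 102008 is illustrative):

-- which shard holds distribution-column value 42 of companies?
SELECT get_shard_id_for_distribution_column('companies', 42);
-- which worker hosts that shard?
SELECT n.nodename, n.nodeport
FROM pg_dist_placement p JOIN pg_dist_node n ON p.groupid = n.groupid
WHERE p.shardid = 102008;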

Stored Procedures, Functions, Views, Sequences, Serial and Identity Columns

 

PostgreSQL SQL Feature Compatibility and Limitations

  A distinction must be drawn between non-distributed and distributed transactions.

TPC-C Testing with BenchmarkSQL

   Since all TPC-C tables are co-located on warehouse_id, running TPC-C works fine; citus's query rewriting is just rather clumsy. For example:

2021-10-07 21:21:47.037945T [239675] LOG:  duration: 97782.322 ms  execute <unnamed>: SELECT count(*) AS low_stock FROM (SELECT bmsql_stock.s_w_id, bmsql_stock.s_i_id, bmsql_stock.s_quantity FROM public.bmsql_stock_103379 bmsql_stock WHERE ((bmsql_stock.s_w_id OPERATOR(pg_catalog.=) $1) AND (bmsql_stock.s_quantity OPERATOR(pg_catalog.<) $2) AND (bmsql_stock.s_i_id OPERATOR(pg_catalog.=) ANY (SELECT bmsql_order_line.ol_i_id FROM (public.bmsql_district_103186 bmsql_district JOIN public.bmsql_order_line_103346 bmsql_order_line ON (((bmsql_order_line.ol_w_id OPERATOR(pg_catalog.=) bmsql_district.d_w_id) AND (bmsql_order_line.ol_d_id OPERATOR(pg_catalog.=) bmsql_district.d_id) AND (bmsql_order_line.ol_o_id OPERATOR(pg_catalog.>=) (bmsql_district.d_next_o_id OPERATOR(pg_catalog.-) 20)) AND (bmsql_order_line.ol_o_id OPERATOR(pg_catalog.<) bmsql_district.d_next_o_id)))) WHERE ((bmsql_district.d_w_id OPERATOR(pg_catalog.=) $3) AND (bmsql_district.d_id OPERATOR(pg_catalog.=) $4)))))) l
2021-10-07 21:21:47.037945T [239675] DETAIL:  parameters: $1 = '974', $2 = '13', $3 = '974', $4 = '10'

  Moreover, once the query reaches the worker, the plan quality is far from ideal: some select count(1) statements took tens of seconds where a single node needed only milliseconds. tpmC dropped from 200,000 to 60,000.

Management Interfaces and Tools

  Beyond standard table creation, a distributed database should at least support:

  Explicit broadcast interfaces: to every primary worker node; to every primary and standby worker node; to every primary shard; to every primary and standby shard.

  Explicit unicast interfaces: to any one worker node; to any one shard.

  See "Manual Query Propagation" (§14.6) in the Citus documentation.
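Citus exposes part of this surface through its run_command_on_* functions; a sketch (signatures as documented for Citus 10):

SELECT * FROM run_command_on_workers($cmd$ SELECT version() $cmd$);                        -- every primary worker
SELECT * FROM run_command_on_shards('companies', $cmd$ SELECT count(*) FROM %s $cmd$);     -- every shard; %s becomes the shard name
SELECT * FROM run_command_on_placements('companies', $cmd$ SELECT count(*) FROM %s $cmd$); -- every placement, i.e. every replica of every shard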

Inspecting Data Distribution

select * from pg_catalog.pg_dist_shard;       -- shards of every distributed table
select * from pg_catalog.pg_dist_partition;   -- distribution metadata of every distributed table
select * from citus_tables ct;                -- every table managed by citus
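Shard sizes and their placement can be read from the citus_shards view (column names per the Citus 10 docs; verify against your version):

SELECT table_name, shardid, nodename, nodeport, shard_size
FROM citus_shards
ORDER BY shard_size DESC
LIMIT 10;  -- the ten largest shards and where they live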

 

TPC-H Testing

   citus's TPC-H support is poor; more precisely, complex joins are poorly supported. Almost any join whose keys do not include the distribution column, or that is not co-located, fails. For example:

Vuser 1:Query Failed : select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_partkey and s_suppkey = l_suppkey and l_orderkey = o_orderkey and o_custkey = c_custkey and c_nationkey = n1.n_nationkey and n1.n_regionkey = r_regionkey and r_name = 'AFRICA' and s_nationkey = n2.n_nationkey and o_orderdate between date '1995-01-01' and date '1996-12-31' and p_type = 'STANDARD POLISHED STEEL') all_nations group by o_year order by o_year : ERROR: complex joins are only supported when all distributed tables are co-located and joined on their distribution columns
Vuser 1:Query Failed : select cntrycode, count(*) as numcust, sum(c_acctbal) as totacctbal from ( select substr(c_phone, 1, 2) as cntrycode, c_acctbal from customer where substr(c_phone, 1, 2) in ('23', '32', '17', '18', '16', '20', '25') and c_acctbal > ( select avg(c_acctbal) from customer where c_acctbal > 0.00 and substr(c_phone, 1, 2) in ('23', '32', '17', '18', '16', '20', '25')) and not exists ( select * from orders where o_custkey = c_custkey)) custsale group by cntrycode order by cntrycode : ERROR: direct joins between distributed and local tables are not supported

  Being extension-based, citus was never going to be a distributed MPP by default the way native GP is. Even with citus.enable_repartition_joins enabled, 10 of the statements still fail.

Citus Caveats

postgres=# create table t_batch(id int primary key generated always as identity,d1 bigint,d2 bigint,d3 bigint);
CREATE TABLE
postgres=# SELECT create_distributed_table('t_batch','id');
ERROR:  cannot distribute relation: t_batch
DETAIL:  Distributed relations must not use GENERATED ... AS IDENTITY.

Yet bigserial, surprisingly, is supported:

postgres=# create table t_batch(id bigserial primary key,d1 bigint,d2 bigint,d3 bigint);
CREATE TABLE
postgres=# SELECT create_distributed_table('t_batch','id');
 create_distributed_table 
--------------------------
 
(1 row)

Sequences and Sequence-backed Defaults

postgres=# alter table bmsql_history 
postgres-#     alter column hist_id set default nextval('bmsql_hist_id_seq');
ALTER TABLE
postgres=# alter table bmsql_history add primary key (hist_id);   -- the constraint must be given a name
ERROR:  cannot create constraint without a name on a distributed table
alter table bmsql_history add constraint bmsql_history_pkey primary key (hist_id);
ERROR: cannot create constraint on "bmsql_history"
  Detail: Distributed relations cannot have UNIQUE, EXCLUDE, or PRIMARY KEY constraints that do not include the partition column (with an equality operator if EXCLUDE).
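The usual workaround is to include the distribution column in the key; a hypothetical sketch assuming bmsql_history is distributed on h_w_id:

alter table bmsql_history add constraint bmsql_history_pkey primary key (h_w_id, hist_id);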

 

postgres=# select pg_size_pretty(citus_relation_size('search_doc_new_ic'));
 pg_size_pretty 
----------------
 10045 MB
(1 row)

Time: 1.367 ms
postgres=# select pg_size_pretty(citus_table_size('search_doc_new_ic'));   -- the difference should not be this large
 pg_size_pretty 
----------------
 216 GB
(1 row)

Time: 14.957 ms
postgres=# select pg_size_pretty(citus_total_relation_size('search_doc_new_ic'));
 pg_size_pretty 
----------------
 243 GB
(1 row)

Primary and Foreign Key Restrictions

tpch=# SELECT create_distributed_table('orders', 'o_orderkey');
NOTICE:  Copying data from local table...
NOTICE:  copying the data has completed
DETAIL:  The local data in the table is no longer visible, but is still on disk.
HINT:  To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.orders$$)
ERROR:  cannot create foreign key constraint since relations are not colocated or not referencing a reference table
DETAIL:  A distributed table can only have foreign keys if it is referencing another colocated hash distributed table or a reference table
tpch=# \dS+ orders
                                                 Table "public.orders"
     Column      |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description 
-----------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 o_orderdate     | timestamp without time zone |           |          |         | plain    |              | 
 o_orderkey      | numeric                     |           | not null |         | main     |              | 
 o_custkey       | numeric                     |           | not null |         | main     |              | 
 o_orderpriority | character(15)               |           |          |         | extended |              | 
 o_shippriority  | numeric                     |           |          |         | main     |              | 
 o_clerk         | character(15)               |           |          |         | extended |              | 
 o_orderstatus   | character(1)                |           |          |         | extended |              | 
 o_totalprice    | numeric                     |           |          |         | main     |              | 
 o_comment       | character varying(79)       |           |          |         | extended |              | 
Indexes:
    "orders_pk" PRIMARY KEY, btree (o_orderkey)
    "order_customer_fkidx" btree (o_custkey)
Foreign-key constraints:
    "order_customer_fk" FOREIGN KEY (o_custkey) REFERENCES customer(c_custkey)
Referenced by:
    TABLE "lineitem" CONSTRAINT "lineitem_order_fk" FOREIGN KEY (l_orderkey) REFERENCES orders(o_orderkey)
Access method: heap


NOTICE:  removing table public.lineitem from metadata as it is not connected to any reference tables via foreign keys

tpch=# SELECT create_distributed_table('part', 'p_partkey');
NOTICE:  Copying data from local table...
NOTICE:  copying the data has completed
DETAIL:  The local data in the table is no longer visible, but is still on disk.
HINT:  To remove the local data, run: SELECT truncate_local_data_after_distributing_table($$public.part$$)
ERROR:  cannot create foreign key constraint since foreign keys from reference tables and local tables to distributed tables are not supported
DETAIL:  Reference tables and local tables can only have foreign keys to reference tables and local tables

Parameter Tuning Suggestions

Suggested citus parameter adjustments:
citus.shard_count=2*number of nodes  # do not make it too big: size of the largest table / 16GB for OLTP, / 8GB for OLAP; per worker it must not exceed citus.max_adaptive_executor_pool_size. Too many shards make broadcast queries slow, because every shard must be queried.

Keep the shard count <= the CPU count (physical cores, not hyperthreads), because citus's unit of concurrency is the shard, not the worker.
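citus.shard_count only affects tables distributed after it is set; a sketch (32 is an illustrative value; big_table is the table used in the demo below):

SET citus.shard_count = 32;                          -- applies to subsequent create_distributed_table calls
SELECT create_distributed_table('big_table', 'id');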
# on the coordinator
findptdis=# insert into big_table select id, uuid() from generate_series(1,100000000) id
findptdis-# ;
INSERT 0 100000000

# on a worker
Tasks: 1539 total, 1 running, 1534 sleeping, 4 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.2 sy, 0.0 ni, 99.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 39462998+total, 18012609+free, 25997252 used, 18850662+buff/cache
KiB Swap: 2097144 total, 2097144 free, 0 used. 32535580+avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
92538 lightdb 20 0 66.2g 68288 62800 S 2.6 0.0 0:00.22 lightdb: lightdb findptdis 10.20.137.41(24542) COPY
92721 lightdb 20 0 66.2g 59784 54720 S 2.6 0.0 0:00.21 lightdb: lightdb findptdis 10.20.137.41(24920) COPY
92723 lightdb 20 0 66.2g 58908 53844 S 2.6 0.0 0:00.22 lightdb: lightdb findptdis 10.20.137.41(24932) COPY
92717 lightdb 20 0 66.2g 58760 53696 S 2.3 0.0 0:00.19 lightdb: lightdb findptdis 10.20.137.41(24884) COPY
92718 lightdb 20 0 66.2g 58072 53008 S 2.3 0.0 0:00.21 lightdb: lightdb findptdis 10.20.137.41(24896) COPY
92719 lightdb 20 0 66.2g 56568 51504 S 2.3 0.0 0:00.21 lightdb: lightdb findptdis 10.20.137.41(24908) COPY
92720 lightdb 20 0 66.2g 57468 52404 S 2.3 0.0 0:00.21 lightdb: lightdb findptdis 10.20.137.41(24916) COPY
92722 lightdb 20 0 66.2g 58900 53836 S 2.3 0.0 0:00.20 lightdb: lightdb findptdis 10.20.137.41(24926) COPY
92724 lightdb 20 0 66.2g 58956 53892 S 2.3 0.0 0:00.21 lightdb: lightdb findptdis 10.20.137.41(24934) COPY
92725 lightdb 20 0 66.2g 58396 53332 S 1.7 0.0 0:00.18 lightdb: lightdb findptdis 10.20.137.41(24936) COPY
219872 lightdb 20 0 66.2g 136436 132844 S 1.3 0.0 2:31.60 lightdb: walwriter

\timing can be used to measure the latency gap between citus and a single instance.

citus.shard_max_size=32GB
citus.log_distributed_deadlock_detection=true  # dev/test environments
citus.local_table_join_policy=auto # the default is fine; when needed, a reference table is usually the better answer, unless the table is large, rarely written, and hit frequently by small queries (think TA business-parameter or permission tables, which in practice are still not a great fit)
citus.binary_worker_copy_format
citus.coordinator_aggregation_strategy='disabled'
citus.max_intermediate_result_size=-1 
citus.all_modifications_commutative # usable as a workload optimization, normally at session level
citus.multi_task_query_log_level=log
citus.enable_repartition_joins=true
citus.enable_binary_protocol
citus.executor_slow_start_interval=5ms
citus.max_cached_conns_per_worker=2
citus.explain_all_tasks=true
citus.node_conninfo='sslmode=allow'

Key System Tables

pg_dist_partition  metadata of every distributed table
pg_dist_colocation  co-location groups (every table belongs to one)
pg_dist_shard  shard information for each table, including hash ranges
citus_shards  a view combining the three tables above
pg_dist_placement  a placement is one copy of a shard on a particular worker
pg_dist_node  node information
citus.pg_dist_object  all objects created on the CN

citus_dist_stat_activity
citus_worker_stat_activity
citus_lock_waits
pg_dist_authinfo 
pg_dist_poolinfo
pg_dist_rebalance_strategy  https://docs.citusdata.com/en/v10.2/develop/api_metadata.html#pg-dist-rebalance-strategy

Key Management Functions

citus_add_node
citus_remove_node
citus_get_active_worker_nodes
master_get_table_metadata
get_shard_id_for_distribution_column
citus_relation_size
citus_table_size
citus_total_relation_size
citus_move_shard_placement

rebalance_table_shards
  The default strategy is appropriate under these circumstances:
  - The shards are roughly the same size
  - The shards get roughly the same amount of traffic
  - Worker nodes are all the same size/type
  - Shards haven't been pinned to particular workers
  If any of these assumptions don't hold, the default rebalancing can produce a bad plan; in that case, customize the strategy via the rebalance_strategy parameter.
  It is advisable to call get_rebalance_table_shards_plan before running rebalance_table_shards, to see and verify the actions to be performed.
get_rebalance_table_shards_plan
get_rebalance_progress
citus_add_rebalance_strategy
citus_set_default_rebalance_strategy
citus_remote_connection_stats
citus_drain_node

Planned LightDB 22c Enhancements

  1. Co-location implemented via logical-replication synchronization
  2. Externalized hash algorithm to enable direct copy
  3. Multi-column sharding (first organization id, then customer id; or first customer id, then account id), depending on secondary development of the hash algorithm
  4. Optimizer hint support

Common Issues

On a DN node:
[lightdb@hs-10-20-30-195 log]$ grep 133029 lightdb-2022-01-09_052610.log
2022-01-09 05:41:43.628964T citus lightdb@findptlocaldis 10.20.137.41(50559) client backend ROLLBACK PREPARED 42704[2022-01-09 05:36:14 CST] 0 [221460] ERROR: prepared transaction with identifier "citus_0_133029_1096882410_5065" does not exist
2022-01-09 05:41:43.628964T citus lightdb@findptlocaldis 10.20.137.41(50559) client backend ROLLBACK PREPARED 42704[2022-01-09 05:36:14 CST] 0 [221460] STATEMENT: ROLLBACK PREPARED 'citus_0133029_1096882410_5065'

On the CN node:
The CN also logs "distributed transaction identifier not found" messages, but at WARNING level.

Analysis: there are two possibilities. 1. Nothing is wrong: if any participant of the 2PC fails, everything is rolled back, so the CN logs a warning while a DN that never prepared logs an error. 2. Something really is wrong: the citus 2PC implementation is flawed and misjudges. The 2PC log itself is kept in pg_dist_transaction; in theory there should never be an entry in pg_dist_transaction without a matching prepared transaction on the DN (to verify, analyze and test how many rollbacks are sent when one participant fails). The citus maintenance daemon periodically polls and cleans these up.
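The verification described above can be started by comparing the CN's 2PC log with the prepared transactions actually present on each node; a sketch:

-- on the CN: citus's record of prepared transactions, one row per (worker group, gid)
SELECT * FROM pg_dist_transaction;
-- on each DN: what is really still prepared (standard PostgreSQL view)
SELECT gid, prepared, database FROM pg_prepared_xacts;
-- a gid present in pg_dist_transaction but absent on the DN matches the warning/error pattern above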

 

On a DN node:
2022-01-15 11:38:26.049486T citus lightdb@findptlocaldis 192.168.1.113(51432) client backend ROLLBACK PREPARED 55000[2022-01-15 11:33:25 CST] 0 [22567] ERROR:  prepared transaction with identifier "citus_0_53889_14465378_116756" is busy
On another DN node:
lightdb-2022-01-14_135814.log:2022-01-14 20:39:17.000449T citus lightdb@findptlocaldis 192.168.1.112(45070) client backend ROLLBACK PREPARED 0A000[2022-01-14 20:30:15 CST] 0 [21215] ERROR: prepared transaction belongs to another database
lightdb-2022-01-14_135814.log-2022-01-14 20:39:17.000449T citus lightdb@findptlocaldis 192.168.1.112(45070) client backend ROLLBACK PREPARED 0A000[2022-01-14 20:30:15 CST] 0 [21215] HINT: Connect to the database where the transaction was prepared to finish it.
lightdb-2022-01-14_135814.log-2022-01-14 20:39:17.000449T citus lightdb@findptlocaldis 192.168.1.112(45070) client backend ROLLBACK PREPARED 0A000[2022-01-14 20:30:15 CST] 0 [21215] STATEMENT: ROLLBACK PREPARED 'citus_0_14522_30628126_136054'

Errors on the CN Node

Every SQL statement fails.

 

    The configuration was shared_preload_libraries='citus,auto_explain,pg_stat_statements,pg_prewarm,pg_wait_sampling,pg_stat_kcache,pg_cron,pgaudit,pg_qualstats'. The issue is unrelated to auto_explain: the null-pointer dereference (methods is NULL) occurs even when it is not configured.

Conflict Between citus and auto_explain

https://github.com/citusdata/citus/issues/2920

https://github.com/citusdata/citus/pull/4068 claims this was fixed in 9.5, but we are on 10.2.2 and it is still unresolved.
