Understanding the kd-tree in AMCL

The previous chapter, on particle filter initialization, mentioned initializing the kd-tree and inserting nodes. This chapter looks at how the kd-tree is used there on its own.

I had read this code before, but only half understood it. After working through some binary search tree exercises and coming back to it, things became much clearer.

It is essentially a multi-dimensional binary search tree, where the splitting dimension is chosen among the pose coordinates (x, y, theta).

Below is a detailed walkthrough of the kd-tree code: creation, insertion, lookup, and clustering.

 

1. The basic structure of the kd-tree: cell size, root node, node array pointer, node counts, and leaf count. These fields are initialized when the tree is created.

// A kd tree
typedef struct
{
  // Cell size
  double size[3];  // cell size (bin width) per dimension: x, y, theta

  // The root node of the tree
  pf_kdtree_node_t *root;

  // The number of nodes in the tree
  int node_count, node_max_count;
  pf_kdtree_node_t *nodes;

  // The number of leaf nodes in the tree
  int leaf_count;

} pf_kdtree_t;
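
For reference, the node type that root and nodes point to is defined in the same header. From my reading of the AMCL source it looks roughly like this (treat the exact layout as approximate):

// Info for a node in the tree
typedef struct pf_kdtree_node
{
  // Leaf flag and depth in the tree
  int leaf, depth;

  // Pivot dimension and value used when this node was split
  int pivot_dim;
  double pivot_value;

  // The key for this node (the discretized pose, i.e. the histogram bin index)
  int key[3];

  // The accumulated sample weight for this bin
  double value;

  // The cluster label (only meaningful for leaf nodes)
  int cluster;

  // Child nodes
  struct pf_kdtree_node *children[2];

} pf_kdtree_node_t;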

 

2. Creating the tree with pf_kdtree_alloc and initializing its fields.

// Create a tree
pf_kdtree_t *pf_kdtree_alloc(int max_size)
{
  pf_kdtree_t *self;

  self = calloc(1, sizeof(pf_kdtree_t));

  self->size[0] = 0.50;
  self->size[1] = 0.50;
  self->size[2] = (10 * M_PI / 180);

  self->root = NULL;

  self->node_count = 0;
  self->node_max_count = max_size;
  self->nodes = calloc(self->node_max_count, sizeof(pf_kdtree_node_t));

  self->leaf_count = 0;

  return self;
}

A few things worth noting:

(1) The size array holds the discretization scale (bin width) for each pose component (x, y, theta). This is not obvious here; I worked it out backwards from the node-insertion code.

(2) self is the pointer to the tree itself, which the later code keeps passing around. (The filter initialization earlier defines a similar self pointer in the same style.)

(3) node_count and leaf_count are both updated when nodes are inserted. (A small usage sketch follows this list.)
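
A minimal usage sketch of the allocation, assuming the pf_kdtree.h header and the pf sources are available to compile and link against; the 3 * max_samples sizing is how I remember pf.c doing it, so treat that factor as an assumption:

#include <stdio.h>
#include "pf_kdtree.h"   // header name/location assumed from the AMCL pf directory

int main(void)
{
  // pf.c (as far as I recall) sizes each sample set's tree as 3 * max_samples;
  // the headroom is needed because every split turns one leaf into an interior
  // node plus two new leaves.
  pf_kdtree_t *tree = pf_kdtree_alloc(3 * 100);

  // Right after allocation everything is empty except the capacity
  printf("nodes=%d leaves=%d capacity=%d\n",
         tree->node_count, tree->leaf_count, tree->node_max_count);
  return 0;
}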

 

3. Inserting nodes: building the histogram

// Add sample to histogram
    pf_kdtree_insert(set->kdtree, sample->pose, sample->weight);

3.1 The pf_kdtree_insert function: the key array is the discretized pose; the bin widths in size[] act as normalization constants that you can tune yourself.

// Insert a pose into the tree.
void pf_kdtree_insert(pf_kdtree_t *self, pf_vector_t pose, double value)
{
  int key[3];

  key[0] = floor(pose.v[0] / self->size[0]);
  key[1] = floor(pose.v[1] / self->size[1]);
  key[2] = floor(pose.v[2] / self->size[2]);

  self->root = pf_kdtree_insert_node(self, NULL, self->root, key, value);

  return;
}
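
To make the binning concrete, here is a tiny standalone sketch with a made-up pose and the default bin widths (0.5 m, 0.5 m, 10 degrees):

#include <math.h>
#include <stdio.h>

int main(void)
{
  // Default bin widths from pf_kdtree_alloc()
  double size[3] = {0.50, 0.50, 10 * M_PI / 180};

  // A made-up pose: x = 1.20 m, y = -0.30 m, theta = 0.40 rad
  double pose[3] = {1.20, -0.30, 0.40};

  int key[3];
  for (int i = 0; i < 3; i++)
    key[i] = (int)floor(pose[i] / size[i]);

  // Prints key = (2, -1, 2): the pose falls in the bin covering
  // x in [1.0, 1.5), y in [-0.5, 0.0), theta in [0.349, 0.524)
  printf("key = (%d, %d, %d)\n", key[0], key[1], key[2]);
  return 0;
}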

3.2 The pf_kdtree_insert_node function: recursively inserts the particles into the kd-tree.

// Insert a node into the tree
pf_kdtree_node_t *pf_kdtree_insert_node(pf_kdtree_t *self, pf_kdtree_node_t *parent,
                                        pf_kdtree_node_t *node, int key[], double value)
{
  int i;
  int split, max_split;

  // If the node doesn't exist yet... create and initialize it
  if (node == NULL)
  {
    assert(self->node_count < self->node_max_count);
    node = self->nodes + self->node_count++;  // pointer + integer: take the next slot in the preallocated node array
    memset(node, 0, sizeof(pf_kdtree_node_t));

    node->leaf = 1;

    if (parent == NULL)
      node->depth = 0;
    else
      node->depth = parent->depth + 1;

    for (i = 0; i < 3; i++)
      node->key[i] = key[i];

    node->value = value;
    self->leaf_count += 1;
  }

  // If the node exists, and it is a leaf node...
  else if (node->leaf)
  {
    // If the keys are equal, increment the value
    if (pf_kdtree_equal(self, key, node->key))
    {
      node->value += value;  // poses with the same key accumulate their weights
    }

    // The keys are not equal, so split this node
    else
    {
      // Find the dimension in which the two keys differ the most,
      // then split at the midpoint of the two values
      max_split = 0;
      node->pivot_dim = -1;
      for (i = 0; i < 3; i++)
      {
        split = abs(key[i] - node->key[i]);
        if (split > max_split)
        {
          max_split = split;
          node->pivot_dim = i;
        }
      }
      assert(node->pivot_dim >= 0);

      node->pivot_value = (key[node->pivot_dim] + node->key[node->pivot_dim]) / 2.0;  // midpoint of the two keys along the split dimension

      if (key[node->pivot_dim] < node->pivot_value)
      {
        node->children[0] = pf_kdtree_insert_node(self, node, NULL, key, value);  // new data becomes the left child
        node->children[1] = pf_kdtree_insert_node(self, node, NULL, node->key, node->value);  // existing data moves to the right child
      }
      else
      {
        node->children[0] = pf_kdtree_insert_node(self, node, NULL, node->key, node->value);  // existing data moves to the left child
        node->children[1] = pf_kdtree_insert_node(self, node, NULL, key, value);  // new data becomes the right child
      }

      node->leaf = 0;
      self->leaf_count -= 1;
    }
  }

  // If the node exists, and it has children... descend into the matching
  // subtree and insert the sample there
  else
  {
    assert(node->children[0] != NULL);
    assert(node->children[1] != NULL);

    if (key[node->pivot_dim] < node->pivot_value)
      pf_kdtree_insert_node(self, node, node->children[0], key, value);
    else
      pf_kdtree_insert_node(self, node, node->children[1], key, value);
  }

  return node;
}

The basic insertion flow:

1) A new sample (pose, weight) comes in and its key is computed.

2) If it is the first sample, the root is still NULL, so the first node is created and initialized with leaf = 1: it is a leaf and also the root, and the counters are incremented. The first insertion is done.

3) The second sample comes in; repeat step 1.

4) The root now exists and is a leaf. If the second sample's key equals the root's key, the root's value is simply updated, i.e. weights with the same key are accumulated. If the keys differ, the node is split into left and right children along the dimension where the two keys differ the most.

  4.1) Compute the per-dimension differences of key[i] to pick one of (x, y, theta) as the split dimension. Note that this split just separates two different histogram bins.

  4.2) If the new key in the split dimension is below the pivot value (the midpoint of the old and new keys), the new data goes to the left child and the existing data to the right child; otherwise the other way round. (Exactly like a binary search tree.)

  4.3) The two children are created by recursive calls, which initialize the two new nodes (step 2 again).

  4.4) The old node's leaf flag is set to 0, since it is no longer a leaf. The second insertion is done: the tree now has an interior root with two leaf children, one carrying the first sample's data (copied down from the root) and the other carrying the second sample's.

5) The third sample comes in; repeat step 1.

6) The root now exists and has children. The new key is compared with the root's pivot value in its split dimension and the call recurses into the matching child; from there the third sample is handled by the same leaf logic as before, i.e. it is now compared against the first or second node instead of the root.

7) Repeat until all samples are inserted.

The flow reads a bit convoluted; I only sorted out the details after walking through a concrete example on paper. If it doesn't click, work through a one-dimensional example yourself and it becomes clear (a small sketch follows below). If I find time later I will add an animated illustration.
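
If you would rather let the machine do the simulation, a minimal sketch like the one below (again assuming the pf_kdtree.h header and the pf sources to link against) inserts three samples and prints the counters, so you can watch the accumulate-then-split behaviour described above:

#include <stdio.h>
#include "pf_kdtree.h"   // header name/location assumed from the AMCL pf directory

int main(void)
{
  pf_kdtree_t *tree = pf_kdtree_alloc(100);
  pf_vector_t pose = {{0.0, 0.0, 0.0}};

  // First sample: creates the root (1 node, 1 leaf)
  pf_kdtree_insert(tree, pose, 0.5);
  printf("nodes=%d leaves=%d\n", tree->node_count, tree->leaf_count);

  // Same bin again: only the root's value is accumulated, no new nodes
  pf_kdtree_insert(tree, pose, 0.2);
  printf("nodes=%d leaves=%d\n", tree->node_count, tree->leaf_count);

  // A pose one x-bin away (key[0] becomes 1): the root splits into two leaves
  pose.v[0] = 0.6;
  pf_kdtree_insert(tree, pose, 0.3);
  printf("nodes=%d leaves=%d\n", tree->node_count, tree->leaf_count);
  // Expected output: 1/1, then 1/1 again, then 3/2 after the split

  return 0;
}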

 

4. Finding a node: the basic idea is the recursive lookup of a binary search tree. Once insertion is clear, this is straightforward: compare the key against the node's pivot value, go right if it is larger (or equal), left if smaller, and recurse.

pf_kdtree_node_t *pf_kdtree_find_node(pf_kdtree_t *self, pf_kdtree_node_t *node, int key[])
{
  if (node->leaf)
  {
    //printf("find  : leaf %p %d %d %d\n", node, node->key[0], node->key[1], node->key[2]);//这里不明白key的作用

    // If the keys are the same...
    if (pf_kdtree_equal(self, key, node->key))
      return node;
    else
      return NULL;
  }
  else
  {
    //printf("find  : brch %p %d %f\n", node, node->pivot_dim, node->pivot_value);

    assert(node->children[0] != NULL);
    assert(node->children[1] != NULL);

    // If the keys are different...
    if (key[node->pivot_dim] < node->pivot_value)
      return pf_kdtree_find_node(self, node->children[0], key);
    else
      return pf_kdtree_find_node(self, node->children[1], key);
  }

  return NULL;
}

 

5. The functions above are used when the particle filter is (re)initialized: at that point the mean and covariance of the sample set have to be recomputed, which calls pf_kdtree_cluster:

// Cluster the leaves in the tree
void pf_kdtree_cluster(pf_kdtree_t *self)
{
  int i;
  int queue_count, cluster_count;
  pf_kdtree_node_t **queue, *node;  // queue is an array of pointers to the tree's leaf nodes

  queue_count = 0;
  queue = calloc(self->node_count, sizeof(queue[0]));  // room for one pointer per node

  // Put all the leaves in a queue: walk every node, keep the leaves, and mark them as unlabelled (-1)
  for (i = 0; i < self->node_count; i++)
  {
    node = self->nodes + i;
    if (node->leaf)
    {
      node->cluster = -1;  // -1 means "no cluster assigned yet"
      assert(queue_count < self->node_count);
      queue[queue_count++] = node;  // push onto the queue

      // TESTING; remove
      assert(node == pf_kdtree_find_node(self, self->root, node->key));
    }
  }

  cluster_count = 0;

  // Do connected components: pop each leaf and assign cluster labels
  while (queue_count > 0)
  {
    node = queue[--queue_count];

    // If this node has already been labelled, skip it
    if (node->cluster >= 0)
      continue;

    // Assign a new label to this cluster
    node->cluster = cluster_count++;

    // Recursively label the neighbouring leaves in this cluster
    pf_kdtree_cluster_node(self, node, 0);
  }

  free(queue);
  return;
}

 

The pf_kdtree_cluster_node function:

// Recursively label nodes in this cluster
void pf_kdtree_cluster_node(pf_kdtree_t *self, pf_kdtree_node_t *node, int depth)
{
  int i;
  int nkey[3];
  pf_kdtree_node_t *nnode;

  for (i = 0; i < 3 * 3 * 3; i++)  // 27 neighbour offsets in the 3-D key grid
  {
      // Visit all 27 neighbouring keys (offset -1/0/+1 in each of the three
      // dimensions) and give any occupied neighbouring leaf the same cluster label
    nkey[0] = node->key[0] + (i / 9) - 1;
    nkey[1] = node->key[1] + ((i % 9) / 3) - 1;
    nkey[2] = node->key[2] + ((i % 9) % 3) - 1;

    nnode = pf_kdtree_find_node(self, self->root, nkey);  // search from the root for a leaf with exactly this neighbouring key
    if (nnode == NULL)  // no leaf occupies this neighbouring bin; try the next offset
      continue;

    assert(nnode->leaf);

    // This node already has a label; skip it.  The label should be
    // consistent, however.
    if (nnode->cluster >= 0)   // already labelled via some other neighbour; skip it (the labels must agree)
    {
      assert(nnode->cluster == node->cluster);
      continue;
    }

    // Label this node and recurse
    nnode->cluster = node->cluster;  // neighbouring bins get the same cluster id

    pf_kdtree_cluster_node(self, nnode, depth + 1);  // recurse so the label floods through the whole connected group of bins
  }
  return;
}
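
The index arithmetic in that loop is just a compact way of enumerating all 27 offsets in {-1, 0, +1} for the three key dimensions (the node's own key included). A quick standalone check of the decoding:

#include <stdio.h>

int main(void)
{
  // Reproduce the decoding used in pf_kdtree_cluster_node(): i / 9, (i % 9) / 3
  // and (i % 9) % 3 each run over 0..2, so subtracting 1 gives every
  // combination of -1, 0, +1 across the three dimensions.
  for (int i = 0; i < 3 * 3 * 3; i++)
    printf("offset %2d: (%2d, %2d, %2d)\n",
           i, (i / 9) - 1, ((i % 9) / 3) - 1, ((i % 9) % 3) - 1);
  return 0;
}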

A question I had here: why can leaves whose keys are adjacent be treated as one cluster, and why are non-leaf nodes simply not handled? Looking back at the insertion code, only leaf nodes carry accumulated sample weight (the key/value bins); interior nodes only store the pivot used for splitting, so there is nothing in them to cluster. Adjacent occupied bins form one connected blob of probability mass, which is exactly what a cluster is supposed to represent.

 

The pf_kdtree_get_cluster function looks up which cluster a given pose falls into and returns the cluster index (or -1 if the pose is not in the tree):

int pf_kdtree_get_cluster(pf_kdtree_t *self, pf_vector_t pose)
{
  int key[3];
  pf_kdtree_node_t *node;

  key[0] = floor(pose.v[0] / self->size[0]);
  key[1] = floor(pose.v[1] / self->size[1]);
  key[2] = floor(pose.v[2] / self->size[2]);

  node = pf_kdtree_find_node(self, self->root, key);
  if (node == NULL)
    return -1;
  return node->cluster;
}
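
This is the building block for the per-cluster statistics in the filter: once pf_kdtree_cluster() has labelled the leaves, every sample pose can be mapped to a cluster index and the weights totalled per cluster. The helper below is only a sketch of that idea, not the actual pf.c code; the function name, the MAX_CLUSTERS cap and the pose/weight arrays are placeholders I made up:

#include "pf_kdtree.h"   // header name/location assumed from the AMCL pf directory

#define MAX_CLUSTERS 64  // illustrative cap, not a value taken from pf.c

// Hypothetical helper: sum the weight that falls into each cluster.
// 'poses' and 'weights' stand in for the filter's sample set.
static void sum_cluster_weights(pf_kdtree_t *tree,
                                const pf_vector_t *poses,
                                const double *weights, int n,
                                double cluster_weight[MAX_CLUSTERS])
{
  for (int c = 0; c < MAX_CLUSTERS; c++)
    cluster_weight[c] = 0.0;

  // Label the connected groups of occupied bins first
  pf_kdtree_cluster(tree);

  for (int i = 0; i < n; i++)
  {
    int c = pf_kdtree_get_cluster(tree, poses[i]);
    if (c < 0 || c >= MAX_CLUSTERS)
      continue;                     // pose not in the tree, or label out of range
    cluster_weight[c] += weights[i];
  }
}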

 

There is one more function in the kd-tree file, pf_kdtree_get_prob, which shows up later. Having read the code above, it is straightforward: it looks up a pose and returns the accumulated weight as a probability estimate. (The key computation is the same as during insertion.)

// Determine the probability estimate for the given pose. TODO: this
// should do a kernel density estimate rather than a simple histogram.
double pf_kdtree_get_prob(pf_kdtree_t *self, pf_vector_t pose)
{
  int key[3];
  pf_kdtree_node_t *node;

  key[0] = floor(pose.v[0] / self->size[0]);
  key[1] = floor(pose.v[1] / self->size[1]);
  key[2] = floor(pose.v[2] / self->size[2]);

  node = pf_kdtree_find_node(self, self->root, key);
  if (node == NULL)
    return 0.0;
  return node->value;
}
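
One last small sketch (same header assumption as before) showing that poses landing in the same bin have their weights summed, which is exactly the value pf_kdtree_get_prob reports:

#include <stdio.h>
#include "pf_kdtree.h"   // header name/location assumed from the AMCL pf directory

int main(void)
{
  pf_kdtree_t *tree = pf_kdtree_alloc(100);

  // Two poses less than one bin width (0.5 m) apart in x share the same key
  pf_vector_t a = {{0.10, 0.00, 0.00}};
  pf_vector_t b = {{0.40, 0.00, 0.00}};

  pf_kdtree_insert(tree, a, 0.25);
  pf_kdtree_insert(tree, b, 0.35);

  // Both queries hit the same bin, so both print the summed weight 0.60;
  // a pose in an unoccupied bin would return 0.0
  printf("p(a) = %.2f\n", pf_kdtree_get_prob(tree, a));
  printf("p(b) = %.2f\n", pf_kdtree_get_prob(tree, b));

  return 0;
}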

 

That's it for this chapter. Back to reading the code~

 
