🗒️Haptic Research Introduction
2024-3-17
2024-6-17
 
Paper:
 

1. Data

Taking subject_data_0.p as an example, this file contains a total of 18 sets of data, each following the format outlined below.
Data: data[0] is a Python dictionary

Key | Type          | Example                    | Summary
----+---------------+----------------------------+-------------------------------------
    | str           | 'Happiness'                | different kinds of emotions
    | list          | [('Stroke', 20, 446)]      | (GESTURE_NAME, start_time, end_time)
    | numpy.ndarray | dim (466, 8, 18); see text | (duration, rows, columns)
    | int           | 1                          | element in [0-2]
    | int           | 0                          | element in [0-39]
The other subject_data_num files follow the same format.
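A minimal sketch of how such a file can be read. The record below mirrors the table above, but the dictionary key names used here ('emotion', 'gesture', 'frames', 'label_a', 'label_b') are placeholders, since the real keys are not listed in this note; a synthetic file stands in for subject_data_0.p:

```python
import os
import pickle
import tempfile

import numpy as np

# One record shaped like the table above; key names are placeholders.
record = {
    'emotion': 'Happiness',              # str, the emotion label
    'gesture': [('Stroke', 20, 446)],    # single-element list of a tuple
    'frames': np.zeros((466, 8, 18)),    # (duration, rows, columns)
    'label_a': 1,                        # int in [0-2]
    'label_b': 0,                        # int in [0-39]
}

# Write and read back a file shaped like subject_data_0.p: 18 sets of data.
path = os.path.join(tempfile.mkdtemp(), 'subject_data_0.p')
with open(path, 'wb') as f:
    pickle.dump([record] * 18, f)
with open(path, 'rb') as f:
    data = pickle.load(f)

print(len(data), data[0]['frames'].shape)  # -> 18 (466, 8, 18)
```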

2. Work

2.1 Classification

When we receive a new set of sensor data, we can classify it to determine the specific gesture and emotion it represents.
  • gesture classification
    • mapping: frames → gesture
  • emotion classification
    • mapping: frames → emotion
💡
Supervised learning is a good fit for both tasks.
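As a sketch of the supervised route, here is a minimal nearest-centroid classifier over flattened 8x18 frames. Synthetic data stands in for the real recordings; any off-the-shelf classifier (SVM, random forest, a small CNN) would slot into the same frames → label mapping:

```python
import numpy as np

def fit_centroids(frames, labels):
    """Compute one mean (centroid) feature vector per class.

    frames: (N, 8, 18) array of pressure frames
    labels: (N,) integer class labels
    """
    X = frames.reshape(len(frames), -1)  # flatten each 8x18 frame to 144-dim
    classes = np.unique(labels)
    centroids = np.stack([X[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(frames, classes, centroids):
    """Assign each frame to the nearest centroid (Euclidean distance)."""
    X = frames.reshape(len(frames), -1)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic stand-in: class 0 = low pressure, class 1 = high pressure.
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(0.2, 0.05, (50, 8, 18)),
                         rng.normal(0.8, 0.05, (50, 8, 18))])
labels = np.array([0] * 50 + [1] * 50)

classes, centroids = fit_centroids(frames, labels)
pred = predict(frames, classes, centroids)
accuracy = (pred == labels).mean()
```

The same fit/predict pair applies to emotion labels; only the label vector changes.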

2.2 Data Dimensionality Reduction

We aim to perform dimensionality reduction on each frame from 8x18 to 2x4, enabling its use on low-precision devices.
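One plausible reduction is block averaging; note that 18 columns do not divide evenly into 4, so `np.array_split` yields near-equal column groups (5, 5, 4, 4). This is only a sketch of the idea, not a chosen method:

```python
import numpy as np

def downsample_frame(frame, out_rows=2, out_cols=4):
    """Reduce an 8x18 pressure frame to out_rows x out_cols by averaging
    each block of cells; uneven splits are handled by np.array_split."""
    row_blocks = np.array_split(frame, out_rows, axis=0)
    return np.array([[block.mean() for block in np.array_split(rb, out_cols, axis=1)]
                     for rb in row_blocks])

# Toy frame with a smooth gradient, to make the pooling visible.
frame = np.arange(8 * 18, dtype=float).reshape(8, 18)
small = downsample_frame(frame)
print(small.shape)  # -> (2, 4)
```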

2.3 Data analysis

We can run some exploratory analysis on this data, for example:
  • Heatmaps of different hand gestures
  • Peak Force per frame
  • Mean Force per frame
    • ……
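The per-frame statistics above reduce to simple axis reductions over the (duration, 8, 18) array; synthetic frames stand in for the real data here, and the gesture span (20, 446) is taken from the example tuple above:

```python
import numpy as np

# Synthetic stand-in for one recording's (duration, 8, 18) frames array.
rng = np.random.default_rng(1)
frames = rng.uniform(0.0, 1.0, (466, 8, 18))

peak_force = frames.max(axis=(1, 2))   # peak force per frame
mean_force = frames.mean(axis=(1, 2))  # mean force per frame

# A simple gesture "heatmap": the average frame over the gesture's span,
# e.g. the ('Stroke', 20, 446) interval from the example above.
heatmap = frames[20:446].mean(axis=0)
```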
💡
Maybe we should try to find statistically significant patterns in the data.
💡
Statistical Analysis allows us to do many things, but I'm unsure of its purpose. What can we do with the data we get from the analysis? If it doesn't help us reach our goals, do we still need to do it?

3. Goal

  • Our work can inform the design of wearable devices for people with disabilities, especially blind users, by taking into account how the device affects emotions.
  • Mainstream wearable devices could likewise benefit from our findings on emotional impact.

 
This is the original readme file:
readme.txt
There are some problems:
  • We don’t have subject_data_12 and subject_data_17
    • → ignore
  • The README states that each gesture element is a tuple, but in reality it is a single-element list, and that element is the tuple.
    • → unwrap the list and use the tuple directly
       
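The fix amounts to a single extra index before unpacking:

```python
# Each gesture entry is a single-element list wrapping the tuple the
# README describes; index into it once, then use the tuple directly.
gesture = [('Stroke', 20, 446)]          # as stored in the data files
gesture_name, start_time, end_time = gesture[0]
print(gesture_name, start_time, end_time)  # -> Stroke 20 446
```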

      1. The dataset quality is low
      2. Some of the descriptions
      3. This can already be achieved via multi-object tracking

      Data Mapping Algorithm

      The data mapping algorithm converts recorded tactile data into signals that can be rendered on a haptic device. It consists of two main steps: Multi-Object Tracking and Trajectory Optimization.

      Algorithm Steps

      1. Multi-Object Tracking
          • Local Maxima Extraction
            • From each frame of tactile data, extract the local maxima: points whose pressure is higher than that of their spatial neighbors. This simplifies the subsequent computation and makes tracking more efficient.
          • Trajectory Generation
            • Use a multi-object tracking algorithm to generate trajectories from the extracted local maxima. These trajectories represent the paths of the fingers or palm across the tactile sensor. Concretely, the algorithm generates trajectories by solving an optimization problem in which long, high-intensity paths score higher and are therefore more likely to be selected as trajectories.
      2. Trajectory Optimization
          • Workspace Constraints
            • For each trajectory, determine the best rendering region on the haptic device. Because the device has fewer contact points than the sensor has sensing points, the high-intensity trajectories must be mapped onto the device's limited contact points.
            • Each contact point has a defined workspace, for example a circular region. The algorithm tests different transformations (e.g. translations) to find the best placement of the contact points, maximizing trajectory coverage and intensity.
          • Optimal Selection
            • For each transformation, compute the trajectories' scores within the contact-point workspaces, and select the combination of transformations with the highest total score. The final output is the best set of trajectories under these constraints.
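The local-maxima step above can be sketched as follows. This is a simple 4-neighbor version on a toy frame, not the paper's actual implementation, which may use a different neighborhood or thresholding:

```python
import numpy as np

def local_maxima(frame, threshold=0.0):
    """Return (row, col) indices of cells whose pressure exceeds all four
    4-connected neighbors and the given threshold."""
    padded = np.pad(frame, 1, constant_values=-np.inf)  # -inf border
    center = padded[1:-1, 1:-1]
    neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
                          padded[1:-1, :-2], padded[1:-1, 2:]])  # left, right
    mask = (center > neighbors.max(axis=0)) & (center > threshold)
    return list(zip(*np.nonzero(mask)))

# Toy 8x18 pressure frame with two pressure peaks.
frame = np.zeros((8, 18))
frame[2, 3] = 1.0
frame[5, 10] = 0.8
peaks = local_maxima(frame, threshold=0.1)
print(peaks)  # -> [(2, 3), (5, 10)]
```

These peak points would then feed the trajectory-generation step, which links maxima across consecutive frames.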
