Converting the .pkl files produced by FrankMocap to .bvh or .fbx

2023-05-16

Goal

Convert the .pkl files produced by FrankMocap into .bvh for Blender or .fbx for Maya.
FrankMocap GitHub project page
For 2D→3D→.bvh conversion, see VideoTo3dPoseAndBvh. Converting .bvh to 3D points is comparatively easy, since the vectors between joints in a .bvh carry the full 6-DoF information, whereas bare 3D points only have 3 DoF; to convert 3D points to .bvh you have to constrain the remaining degrees of freedom.
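To make the DoF point concrete: given a rest-pose bone direction and a posed bone direction recovered from bare 3D joints, the rotation between them is only determined up to the twist about the bone axis. A minimal numpy sketch (function names are my own, not from any of the projects mentioned here):

```python
import numpy as np

def rotation_between(u, v):
    """Rotation matrix taking unit direction u to unit direction v.

    Only 2 of the 3 rotational DoF are determined here: the twist about
    the bone axis stays free. That twist is exactly what a .bvh rotation
    stores but bare 3D joint positions cannot provide.
    """
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    c = float(np.dot(u, v))
    w = np.cross(u, v)
    s = float(np.linalg.norm(w))
    if s < 1e-8:
        if c > 0:
            return np.eye(3)  # directions already aligned
        # anti-parallel: 180-degree turn about any axis perpendicular to u
        a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        a = a - np.dot(a, u) * u
        a = a / np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    # Rodrigues formula built from the (unnormalized) cross product
    K = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / (s * s))
```

Any extra twist about the bone axis would map u to v equally well, which is why 3D-points-to-.bvh needs extra constraints.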

Problems

1. As of May 9, 2022 the original author had not yet added this feature, i.e. a Blender add-on;
2. Third-party authors have heavily modified various related plugins that provide Blender interfaces for EasyMocap, VIBE, FrankMocap, etc.; however, many of them support SMPL but not SMPL-X;
3. There are few tutorials on Blender's bundled Python environment and on installing packages into it, and I didn't feel like reading the official docs.

Related 1

A third-party add-on: this developer built an add-on based on VIBE (reusing VIBE's .pkl-to-.bvh work), but the plugin is geared more toward EasyMocap; when I imported FrankMocap's .pkl files, Blender only generated 14 joints. You can find many of his comments in the issues of the original FrankMocap project, and you can contact him if you get stuck.
Edit → Preferences → Install add-on (i.e. install the zip) → press N to bring up the add-on panel → import the mocap directory (containing the many .pkl files extracted from one video)

Related 2

I forget where I found the second one; it is a Python file, fbx_output_FRANKMOCAP.py. It imports bpy, so it clearly has to run inside Blender. Blender's Python support is genuinely unfriendly: you have to dig through the official manual, I never figured out what the built-in Python console is good for, and different Blender versions behave differently and throw all sorts of strange errors.

Environment

Ubuntu + Blender 3.1.2 stable;

Download files

You need the SMPL_unity_v.1.0.0.zip file (available from the official SMPL website). Unzip it into the VIBE project directory, and place the fbx_output_FRANKMOCAP.py script under lib/utils in the VIBE project. You don't need to set up the original project's environment; only the script's dependencies need to be installed, as follows.

Installing packages

Just put the required packages into the site-packages directory under the Blender installation. My approach was to create a conda virtual environment with the same Python version as Blender's, then from inside that environment use pip install <package> -t <directory> to install into Blender's site-packages.
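As a sketch of that trick, the target path and command can be composed like this; the directory layout below is an assumption based on a typical unpacked Linux Blender build, so check it against your own install:

```python
import os

def blender_site_packages(blender_root, blender_version="3.1", py_version="3.10"):
    """Build the site-packages path inside an unpacked Linux Blender.

    Assumed layout: <root>/<version>/python/lib/python<py>/site-packages
    (verify against your own installation; builds differ).
    """
    return os.path.join(blender_root, blender_version, "python", "lib",
                        "python" + py_version, "site-packages")

def pip_install_cmd(package, target_dir):
    """Compose the `pip install -t` command run from the matching conda env."""
    return "pip install {} -t {}".format(package, target_dir)

# Example (hypothetical install path):
target = blender_site_packages("/opt/blender-3.1.2")
print(pip_install_cmd("joblib", target))
```

The key point is matching the conda environment's Python version to Blender's bundled one, so that compiled wheels installed with -t are compatible.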

Run

To run a script or .py file, use: ./blender --python ~/Desktop/VIBE_pkl2fbx/lib/utils/fbx_output_FRANKMOCAP.py. The original file takes argparse command-line arguments; I didn't know how to pass them through Blender, so I edited the code and hardcoded all the parameters. If you want to use the code, change the paths and parameters inside it. In the end you can export a .bvh or .fbx file from Blender.
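For what it's worth, Blender can forward arguments to a script: anything after a lone `--` on the command line is left in `sys.argv` for the script to parse, so the original argparse interface could have been kept with a small helper (a sketch; the flag names mirror the script's `--input`/`--output`):

```python
import argparse

def parse_blender_args(argv):
    """Parse only the arguments Blender forwards after the `--` separator.

    Invocation: ./blender --python script.py -- --input in.pkl --output out.fbx
    Blender consumes everything before `--`; the rest is ours.
    """
    script_args = argv[argv.index("--") + 1:] if "--" in argv else []
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", dest="input_path", type=str, required=True)
    parser.add_argument("--output", dest="output_path", type=str, required=True)
    return parser.parse_args(script_args)
```

Inside Blender you would call parse_blender_args(sys.argv) instead of parser.parse_args(), avoiding the need to hardcode paths.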
Blender Python API introduction (in Chinese)

Using Blender (Ubuntu)

1. You can install it from the command line, or download the archive from the official site and unpack it; I unpacked it.
2. The timeline can be brought up from the bottom-left of Blender; .bvh and .fbx files usually contain a skeleton or skinned-mesh animation, and the timeline lets you play the motion back.
3. You can start Blender's bundled Python interpreter from an Ubuntu terminal with ./python3.10 in the bin directory under the Blender installation, although it doesn't seem very useful.
4. The Python console inside Blender already lives in Blender's Python environment, but you can't really run programs there either; apart from the bpy functions I don't know what else it is for. There are two options: first, open the .py file in the Scripting workspace and run it there; second, use an Ubuntu terminal: go to the Blender installation directory, run ./blender to open the application (double-clicking the icon also works), and run a script or .py file with ./blender --python directory.py
5. Blender can surely be driven through an IDE or other APIs, but I only wanted a single tool and didn't dig deeper.
6. All the result images actually come from a moving clip whose motion matches FrankMocap's input video; the drawback is that this plugin does not output the hand joints.


Modified code

# -*- coding: utf-8 -*-

# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
# holder of all proprietary rights on this computer program.
# You can only use this computer program if you have closed
# a license agreement with MPG or you get the right to use the computer
# program from someone who is authorized to grant you that right.
# Any use of the computer program without a valid license is prohibited and
# liable to prosecution.
#
# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
# for Intelligent Systems. All rights reserved.
#
# Contact: ps-license@tuebingen.mpg.de
#
# Author: Joachim Tesch, Max Planck Institute for Intelligent Systems, Perceiving Systems
#
# Create keyframed animated skinned SMPL mesh from .pkl pose description
#
# Generated mesh will be exported in FBX or glTF format
#
# Notes:
#  + Male and female gender models only
#  + Script can be run from command line or in Blender Editor (Text Editor>Run Script)
#  + Command line: Install mathutils module in your bpy virtualenv with 'pip install mathutils==2.81.2'

import os
import sys
import bpy
import time
import joblib
import argparse
import numpy as np
import addon_utils
from math import radians
from mathutils import Matrix, Vector, Quaternion, Euler
import pickle
import os.path
import pandas as pd


# Globals
male_model_path = '/home/dms/Desktop/VIBE_pkl2fbx/data/SMPL_unity_v.1.0.0/smpl/Models/SMPL_m_unityDoubleBlends_lbs_10_scale5_207_v1.0.0.fbx'
female_model_path = '/home/dms/Desktop/VIBE_pkl2fbx/data/SMPL_unity_v.1.0.0/smpl/Models/SMPL_f_unityDoubleBlends_lbs_10_scale5_207_v1.0.0.fbx'

fps_source = 30
fps_target = 30

gender = 'male'

start_origin = 1

bone_name_from_index = {
    0: 'Pelvis',
    1: 'L_Hip',
    2: 'R_Hip',
    3: 'Spine1',
    4: 'L_Knee',
    5: 'R_Knee',
    6: 'Spine2',
    7: 'L_Ankle',
    8: 'R_Ankle',
    9: 'Spine3',
    10: 'L_Foot',
    11: 'R_Foot',
    12: 'Neck',
    13: 'L_Collar',
    14: 'R_Collar',
    15: 'Head',
    16: 'L_Shoulder',
    17: 'R_Shoulder',
    18: 'L_Elbow',
    19: 'R_Elbow',
    20: 'L_Wrist',
    21: 'R_Wrist',
    22: 'L_Hand',
    23: 'R_Hand'
}

# Helper functions

# Computes rotation matrix through Rodrigues formula as in cv2.Rodrigues
# Source: smpl/plugins/blender/corrective_bpy_sh.py
def Rodrigues(rotvec):
    theta = np.linalg.norm(rotvec)
    r = (rotvec/theta).reshape(3, 1) if theta > 0. else rotvec
    cost = np.cos(theta)
    mat = np.asarray([[0, -r[2], r[1]],
                      [r[2], 0, -r[0]],
                      [-r[1], r[0], 0]])
    return(cost*np.eye(3) + (1-cost)*r.dot(r.T) + np.sin(theta)*mat)


# Setup scene
def setup_scene(model_path, fps_target):
    scene = bpy.data.scenes['Scene']

    ###########################
    # Engine independent setup
    ###########################

    scene.render.fps = fps_target

    # Remove default cube
    if 'Cube' in bpy.data.objects:
        bpy.data.objects['Cube'].select_set(True)
        bpy.ops.object.delete()

    # Import gender specific .fbx template file
    bpy.ops.import_scene.fbx(filepath=model_path)


# Process single pose into keyframed bone orientations
def process_pose(current_frame, pose, trans, pelvis_position):

    if pose.shape[0] == 72:
        rod_rots = pose.reshape(24, 3)
    else:
        rod_rots = pose.reshape(26, 3)

    mat_rots = [Rodrigues(rod_rot) for rod_rot in rod_rots]

    # Set the location of the Pelvis bone to the translation parameter
    armature = bpy.data.objects['Armature']
    bones = armature.pose.bones

    # Pelvis: X-Right, Y-Up, Z-Forward (Blender -Y)

    # Set absolute pelvis location relative to Pelvis bone head
    bones[bone_name_from_index[0]].location = Vector((100*trans[1], 100*trans[2], 100*trans[0])) - pelvis_position

    # bones['Root'].location = Vector(trans)
    bones[bone_name_from_index[0]].keyframe_insert('location', frame=current_frame)

    for index, mat_rot in enumerate(mat_rots, 0):
        if index >= 24:
            continue

        bone = bones[bone_name_from_index[index]]

        bone_rotation = Matrix(mat_rot).to_quaternion()
        quat_x_90_cw = Quaternion((1.0, 0.0, 0.0), radians(-90))
        quat_z_90_cw = Quaternion((0.0, 0.0, 1.0), radians(-90))

        if index == 0:
            # Rotate pelvis so that avatar stands upright and looks along negative Y axis
            bone.rotation_quaternion = (quat_x_90_cw @ quat_z_90_cw) @ bone_rotation
        else:
            bone.rotation_quaternion = bone_rotation

        bone.keyframe_insert('rotation_quaternion', frame=current_frame)

    return


# Process all the poses from the pose file
def process_poses(
        input_path,
        gender,
        fps_source,
        fps_target,
        start_origin,
        person_id=1,
):

    print('Processing: ' + input_path)

    # Start of my modification: merge the per-frame FrankMocap .pkl files
    pkl_path = '/home/dms/Desktop/VIBE_pkl2fbx/data/mocap/'
    s_list = sorted(os.listdir(pkl_path))
    frames = []
    for i in s_list:
        print(i)
        frank_data = pickle.load(open(pkl_path + i, 'rb'))
        pred_list = frank_data.get('pred_output_list', [])
        if not pred_list or 'pred_body_pose' not in pred_list[0]:
            continue  # skip frames where no body was detected
        frames.append(pd.DataFrame(pred_list[0]['pred_body_pose']))
    append_data = pd.concat(frames)  # DataFrame.append is deprecated; use concat

    # print(append_data)
    poses = append_data.to_numpy()
    ########## End of my modification

    # data = joblib.load(input_path)
    # poses = data[person_id]['pose']
    

    trans = np.zeros((poses.shape[0], 3))

    if gender == 'female':
        model_path = female_model_path
        for k,v in bone_name_from_index.items():
            bone_name_from_index[k] = 'f_avg_' + v
    elif gender == 'male':
        model_path = male_model_path
        for k,v in bone_name_from_index.items():
            bone_name_from_index[k] = 'm_avg_' + v
    else:
        print('ERROR: Unsupported gender: ' + gender)
        sys.exit(1)

    # Limit target fps to source fps
    if fps_target > fps_source:
        fps_target = fps_source

    print(f'Gender: {gender}')
    print(f'Number of source poses: {str(poses.shape[0])}')
    print(f'Source frames-per-second: {str(fps_source)}')
    print(f'Target frames-per-second: {str(fps_target)}')
    print('--------------------------------------------------')

    setup_scene(model_path, fps_target)

    scene = bpy.data.scenes['Scene']
    sample_rate = int(fps_source/fps_target)
    scene.frame_end = (int)(poses.shape[0]/sample_rate)

    # Retrieve pelvis world position.
    # Unit is [cm] due to Armature scaling.
    # Need to make copy since reference will change when bone location is modified.
    bpy.ops.object.mode_set(mode='EDIT')
    pelvis_position = Vector(bpy.data.armatures[0].edit_bones[bone_name_from_index[0]].head)
    bpy.ops.object.mode_set(mode='OBJECT')

    source_index = 0
    frame = 1

    offset = np.array([0.0, 0.0, 0.0])

    while source_index < poses.shape[0]:
        print('Adding pose: ' + str(source_index))

        if start_origin:
            if source_index == 0:
                offset = np.array([trans[source_index][0], trans[source_index][1], 0])

        # Go to new frame
        scene.frame_set(frame)

        process_pose(frame, poses[source_index], (trans[source_index] - offset), pelvis_position)
        source_index += sample_rate
        frame += 1

    return frame


def export_animated_mesh(output_path):
    # Create output directory if needed
    output_dir = os.path.dirname(output_path)
    if not os.path.isdir(output_dir):
        os.makedirs(output_dir, exist_ok=True)

    # Select only skinned mesh and rig
    bpy.ops.object.select_all(action='DESELECT')
    bpy.data.objects['Armature'].select_set(True)
    bpy.data.objects['Armature'].children[0].select_set(True)

    if output_path.endswith('.glb'):
        print('Exporting to glTF binary (.glb)')
        # Currently exporting without shape/pose shapes for smaller file sizes
        # Note: newer Blender versions renamed export_selected to use_selection
        bpy.ops.export_scene.gltf(filepath=output_path, export_format='GLB', export_selected=True, export_morph=False)
    elif output_path.endswith('.fbx'):
        print('Exporting to FBX binary (.fbx)')
        bpy.ops.export_scene.fbx(filepath=output_path, use_selection=True, add_leaf_bones=False)
    else:
        print('ERROR: Unsupported export format: ' + output_path)
        sys.exit(1)

    return


if __name__ == '__main__':
    try:
        if bpy.app.background:

            parser = argparse.ArgumentParser(description='Create keyframed animated skinned SMPL mesh from VIBE output')
            parser.add_argument('--input', dest='input_path', type=str, required=True,
                                help='Input file or directory')
            parser.add_argument('--output', dest='output_path', type=str, required=True,
                                help='Output file or directory')
            parser.add_argument('--fps_source', type=int, default=fps_source,
                                help='Source framerate')
            parser.add_argument('--fps_target', type=int, default=fps_target,
                                help='Target framerate')
            parser.add_argument('--gender', type=str, default=gender,
                                help='Always use specified gender')
            parser.add_argument('--start_origin', type=int, default=start_origin,
                                help='Start animation centered above origin')
            parser.add_argument('--person_id', type=int, default=1,
                                help='Detected person ID to use for fbx animation')

            args = parser.parse_args()

            input_path = args.input_path
            output_path = args.output_path

            if not os.path.exists(input_path):
                print('ERROR: Invalid input path')
                sys.exit(1)

            fps_source = 30
            fps_target = 30

            gender = 'male'

            start_origin = 1

        # Hardcoded paths: these override any parsed arguments (edit them for your setup)
        input_path = '~/Desktop/VIBE_pkl2fbx/data/mocap'
        output_path = '~/Desktop/VIBE_pkl2fbx/data/output/output.fbx'

        # end if bpy.app.background

        startTime = time.perf_counter()

        # Process data
        cwd = os.getcwd()

        # Turn relative input/output paths into absolute paths
        # if not input_path.startswith(os.path.sep):
        #     input_path = os.path.join(cwd, input_path)
        #
        # if not output_path.startswith(os.path.sep):
        #     output_path = os.path.join(cwd, output_path)

        print('Input path: ' + input_path)
        print('Output path: ' + output_path)

        if not (output_path.endswith('.fbx') or output_path.endswith('.glb')):
            print('ERROR: Invalid output format (must be .fbx or .glb)')
            sys.exit(1)

        # # Process pose file
        # if input_path.endswith('.pkl'):
        #     if not os.path.isfile(input_path):
        #         print('ERROR: Invalid input file')
        #         sys.exit(1)
        #
        #     poses_processed = process_poses(
        #         input_path=input_path,
        #         gender='male',
        #         fps_source=30,
        #         fps_target=30,
        #         start_origin=1,
        #         # person_id=args.person_id,
        #         person_id = 0
        #     )
        #     export_animated_mesh(output_path)

        poses_processed = process_poses(
            input_path=input_path,
            gender='male',
            fps_source=30,
            fps_target=30,
            start_origin=1,
            # person_id=args.person_id,
            person_id=0
        )
        export_animated_mesh(output_path)

        print('--------------------------------------------------')
        print('Animation export finished.')
        print(f'Poses processed: {str(poses_processed)}')
        print(f'Processing time : {time.perf_counter() - startTime:.2f} s')
        print('--------------------------------------------------')
        sys.exit(0)

    except SystemExit as ex:
        if ex.code is None:
            exit_status = 0
        else:
            exit_status = ex.code

        print('Exiting. Exit status: ' + str(exit_status))

        # Only exit to OS when we are not running in Blender GUI
        if bpy.app.background:
            sys.exit(exit_status)

Related 3

fairmotion, also from Facebook research, mainly provides conversion between the SMPL and BVH formats plus many other tasks, but the project documentation is sparse; it is almost entirely interface functions, so to match your own needs you have to go read the function definitions yourself.

Environment

This environment is very easy to set up; either of the two methods works. The only caveat is that the human_body_prior library has changed, so some small problems will appear; just fix them as they come up.

Run

Running the original code at fairmotion/fairmotion/data/frankmocap.py raises ValueError: could not broadcast input array from shape (8,3,3) into shape (1,3,3). I didn't solve that and used a different version instead; you also need to preprocess the many .pkl files produced by FrankMocap and merge them into a single .pkl. Once that is done, you can run the frankmocap.py file to convert from the SMPL data format to BVH.

Data preprocessing code:

# frankdata_preprocess.py
import os
import pickle
import os.path as op

processed_data = []
root = r'./frankmocap_output/mocap'
for name in sorted(os.listdir(root)):
    file_path = op.join(root, name)
    preprocess_data = pickle.load(open(file_path, 'rb'))
    # Use the 'pred_output_list' key directly instead of relying on dict key order
    motion_data = preprocess_data['pred_output_list']  # [{}]
    processed_data.append(motion_data)  # [[{}], [{}], [{}]]

pickle.dump(processed_data, open(r'./frankmocap_output/processed_data.pkl', 'wb'))
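For reference, the merged structure is a list with one entry per frame, each entry being that frame's pred_output_list; the body poses can then be stacked into one (N, 72) array (24 joints × 3 axis-angle values) without pandas. A sketch, under the assumption that pred_body_pose has shape (1, 72) as in my runs:

```python
import numpy as np

def stack_body_poses(processed_data):
    """processed_data: the merged list, one pred_output_list per frame.

    Returns an (N, 72) array of SMPL body poses; frames without a
    detected body are skipped.
    """
    poses = [np.asarray(frame[0]["pred_body_pose"]).reshape(-1)
             for frame in processed_data
             if frame and "pred_body_pose" in frame[0]]
    return np.stack(poses)
```

Each row of the result is one SMPL pose vector, the same layout the modified VIBE script feeds to process_pose.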

frankmocap2bvh code:

# Copyright (c) Facebook, Inc. and its affiliates.
# fairmotion/fairmotion/data/frankmocap.py
import numpy as np
import pickle
import torch

from fairmotion.data import amass, bvh
from fairmotion.core import motion as motion_classes
from fairmotion.utils import constants, utils
from fairmotion.ops import conversions, motion as motion_ops


def get_smpl_base_position(bm, betas):
    pose_body_zeros = torch.zeros((1, 3 * (22 - 1)))
    body = bm(pose_body=pose_body_zeros, betas=betas)
    base_position = body.Jtr.detach().numpy()[0, 0:22]
    return base_position


def compute_im2sim_scale(
        joints_img,
        base_position,
):
    left_leg_sim = np.linalg.norm(
        base_position[amass.joint_names.index("lknee")] - base_position[amass.joint_names.index("lankle")])
    # indices from frankmocap.bodymocap.constants
    left_leg_img = np.linalg.norm(joints_img[29][:2] - joints_img[30][:2])
    right_leg_sim = np.linalg.norm(
        base_position[amass.joint_names.index("rknee")] - base_position[amass.joint_names.index("rankle")])
    right_leg_img = np.linalg.norm(joints_img[25][:2] - joints_img[26][:2])
    return (left_leg_sim + right_leg_sim) / (left_leg_img + right_leg_img)


def load(
        file,
        motion=None,
        bm_path=None,
        motion_key=None,
        estimate_root=False,
        scale=1.0,
        load_skel=True,
        load_motion=True,
        v_up_skel=np.array([0.0, 1.0, 0.0]),
        v_face_skel=np.array([0.0, 0.0, 1.0]),
        v_up_env=np.array([0.0, 1.0, 0.0]),
):
    processed_data = pickle.load(open(file, "rb"))
    # if motion_key is None:
    #     motion_key = list(all_data.keys())[6]   # the 'pred_output_list' key
    motion_data = processed_data
    bm = amass.load_body_model(bm_path)
    betas = torch.Tensor(np.array(motion_data[0][0]["pred_betas"])[:]).to("cpu")
    # img_shape = motion_data[0]["pred_output_list"][0]["img_shape"]
    num_joints = len(amass.joint_names)
    skel = amass.create_skeleton_from_amass_bodymodel(bm, betas, len(amass.joint_names), amass.joint_names)
    joint_names = [j.name for j in skel.joints]

    num_frames = len(motion_data)
    print(num_frames)
    T = np.random.rand(num_frames, num_joints, 4, 4)
    T[:] = constants.EYE_T
    # Use lowest point of right/left ankle from first image frame as reference
    ref_root_y = np.min((
        motion_data[0][0]["pred_joints_img"][25][1],
        motion_data[0][0]["pred_joints_img"][30][1]
    ))
    for i in range(num_frames):
        for j in range(num_joints):
            T[i][joint_names.index(amass.joint_names[j])] = conversions.R2T(
                np.array(motion_data[i][0]["pred_rotmat"][0])[j]
            )
        if estimate_root:
            R_root = conversions.T2R(T[i][0])
            p_root = np.zeros(3)

            base_position = get_smpl_base_position(bm, betas)
            # compute scale as ratio of limb length in img and bm
            im2sim_scale = compute_im2sim_scale(
                motion_data[i][0]["pred_joints_img"],
                base_position,
            )
            p_root[0] = np.mean((
                motion_data[i][0]["pred_joints_img"][27][0],
                motion_data[i][0]["pred_joints_img"][28][0]
            )) * im2sim_scale
            root_y = np.mean((
                motion_data[i][0]["pred_joints_img"][27][1],
                motion_data[i][0]["pred_joints_img"][28][1]
            ))
            p_root[2] = (ref_root_y - root_y) * im2sim_scale
            # p_root[1] = np.max((
            #     np.linalg.norm(T[i][amass.joint_names.index("root")] - T[i][amass.joint_names.index("lankle")]),
            #     np.linalg.norm(T[i][amass.joint_names.index("root")] - T[i][amass.joint_names.index("rankle")]),
            # ))
            # print(p_root[1])
            T[i][0] = conversions.Rp2T(R_root, p_root)
    motion = motion_classes.Motion.from_matrix(T, skel)

    motion.set_fps(60)
    motion = motion_ops.rotate(
        motion,
        conversions.Ax2R(conversions.deg2rad(-90)),
    )
    # post process to ensure character stays above floor
    positions = motion.positions(local=False)
    for i in range(motion.num_frames()):
        ltoe = positions[i][amass.joint_names.index("ltoe")][2]
        rtoe = positions[i][amass.joint_names.index("rtoe")][2]
        offset = min(ltoe, rtoe)
        if offset < 0.05:
            # print(offset)
            R, p = conversions.T2Rp(T[i][0])
            p[2] += 0.05 - offset
            T[i][0] = conversions.Rp2T(R, p)

    motion = motion_classes.Motion.from_matrix(T, skel)
    motion = motion_ops.rotate(
        motion,
        conversions.Ax2R(conversions.deg2rad(-90)),
    )
    return motion


if __name__ == '__main__':
    file = r'../../daoboke/frankmocap_output/processed_data.pkl'
    motion = load(file=file, bm_path=r'../../body_models/smplh/neutral/model.npz')
    bvh.save(motion, filename=r'../../daoboke/frankmocap_output/output.bvh')

If you just want to convert ordinary SMPL-format data to BVH, you can run the following code in the fairmotion project; however, I did not manage to get bvh2smpl working in fairmotion:

# smpl2bvh.py
import torch
from human_body_prior.body_model.body_model import BodyModel
from fairmotion.data import amass, bvh


def load_body_model(bm_path, num_betas=10, model_type="smplh"):
    comp_device = torch.device("cpu")
    bm = BodyModel(
        bm_path=bm_path,
        num_betas=num_betas,
        model_type=model_type
    ).to(comp_device)
    return bm


# Load file
file = './smplData/DanceDB/20120731_StefanosTheodorou/Stefanos_1os_antrikos_karsilamas_C3D_poses.npz'
bm_path = '../body_models/smplh/{}/model.npz'.format('male')
num_betas = 10   # body parameters
model_type = 'smplh'

bm = load_body_model(bm_path, num_betas, model_type)
motion = amass.load(file, bm, bm_path)
bvh.save(motion, filename='./smplData/output.bvh')
