Point Cloud Generation: Experiments with WGAN-GP on Point Clouds Based on Paddle 2.0


This article experiments with applying WGAN-GP to point clouds. The discriminator borrows the PointNet architecture, while the generator is custom-built. Training uses the ModelNet40 dataset, taking 250 points per model. Networks named FeatureNet and UFeatureNet are defined and trained with the Adam optimizer; generated results are visualized every 2 epochs and the models are saved every 20 epochs. The pipeline currently runs end to end, but the quality of the results still needs improvement.



Project Overview

① Notes

    An attempt at applying WGAN-GP to point clouds.

    1. The discriminator borrows the PointNet architecture.

    2. The generator is an ad-hoc design, thrown together by hand.

    3. The pipeline runs end to end; the results still leave room for improvement.


② Dataset

    ModelNet contains 662 object categories in total, with 127,915 CAD models, plus ten categories annotated with orientation. It includes three subsets:

    1. ModelNet10: a subset of ten orientation-annotated categories;

    2. ModelNet40: 3D models from 40 categories;

    3. Aligned40: aligned 3D models from the 40 labeled categories.

    This project uses ModelNet40, already normalized. The meaning of the values in each data file:

    1. Each row holds six numbers: x, y, z plus three further values (nominally r, g, b; in modelnet40_normal_resampled these are actually the surface normals nx, ny, nz);

    2. Each row is one point, and each file holds 10,000 points; the loader below takes 250 points per file for training.
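A minimal sketch of handling this format, with synthetic data standing in for a real ModelNet file (the array shapes and the subset size here are assumptions for illustration):

```python
import numpy as np

# Synthetic stand-in for one ModelNet40 file: 10000 rows of
# x, y, z plus three extra channels; only the coordinates are kept.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((10000, 6))

# Draw a random subset of points and keep only x, y, z.
idx = rng.choice(10000, size=1024, replace=False)
sample = cloud[idx, :3]
print(sample.shape)  # (1024, 3)
```

(The project's own loader below simply takes the first points of each file instead of sampling at random.)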

       

In [ ]
!unzip data/data50045/modelnet40_normal_resampled.zip
!mv modelnet40_normal_resampled dataset
   

Main Project

① Importing the Required Libraries

In [ ]
import os
import numpy as np
import random
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
import paddle
import paddle.nn.functional as F
from paddle.nn import Conv2D, Conv2DTranspose, MaxPool2D, Linear, BatchNorm, Dropout, ReLU, Tanh, LeakyReLU, Sequential
   

② Data Processing

1. Category mapping

In [ ]
category = {
    'airplane': 0,
}
   

2. Generating the train/test file lists

In [ ]
def getDatalist(file_path='./dataset/modelnet40_shape_names.txt'):
    f = open(file_path, 'r')
    f_train = open('./dataset/train.txt', 'w')
    f_test = open('./dataset/test.txt', 'w')
    for category in f:
        if category.split('\n')[0] == 'airplane':
            dict_path = os.path.join('./dataset/', category.split('\n')[0])
            data_dict = os.listdir(dict_path)
            count = 0
            for data_path in data_dict:
                # roughly 1 in every 61 samples goes to the test set
                if count % 61 != 0:
                    f_train.write(os.path.join(dict_path, data_path) + ' ' + category)
                else:
                    f_test.write(os.path.join(dict_path, data_path) + ' ' + category)
                count += 1
    f_train.close()
    f_test.close()
    f.close()

if __name__ == '__main__':
    getDatalist()
   

3. Data loading

In [ ]
def pointDataLoader(file_path='./dataset/train.txt', mode='train'):
    BATCHSIZE = 8
    MAX_POINT = 250
    datas = []
    labels = []
    f = open(file_path)
    for data_list in f:
        point_data = []
        data_path = data_list.split(' ')[0]
        data_file = open(data_path)
        point_num = 0
        # keep only the first MAX_POINT points of each file, coordinates only
        for points in data_file:
            if point_num == MAX_POINT:
                break
            point_data.append([
                float(points.split(',')[0]),
                float(points.split(',')[1]),
                float(points.split(',')[2])
            ])
            point_num += 1
        datas.append(point_data)
        labels.append(category[data_list.split(' ')[1].split('\n')[0]])
    f.close()
    datas = np.array(datas)
    labels = np.array(labels)

    index_list = list(range(len(datas)))

    def pointDataGenerator():
        if mode == 'train':
            random.shuffle(index_list)
        datas_list = []
        labels_list = []
        for i in index_list:
            data = np.reshape(datas[i], [1, 250, 3]).astype('float32')
            label = np.reshape(labels[i], [1]).astype('int64')
            datas_list.append(data)
            labels_list.append(label)
            if len(datas_list) == BATCHSIZE:
                yield np.array(datas_list), np.array(labels_list)
                datas_list = []
                labels_list = []
        if len(datas_list) > 0:
            yield np.array(datas_list), np.array(labels_list)

    return pointDataGenerator
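The hand-rolled batching inside `pointDataGenerator` can be isolated into a small, self-contained sketch (dummy arrays stand in for the parsed files; note that a final partial batch is also yielded):

```python
import numpy as np

def batch_generator(datas, labels, batchsize=8):
    """Yield (data, label) batches; a final partial batch is kept."""
    batch_d, batch_l = [], []
    for i in range(len(datas)):
        batch_d.append(datas[i])
        batch_l.append(labels[i])
        if len(batch_d) == batchsize:
            yield np.array(batch_d), np.array(batch_l)
            batch_d, batch_l = [], []
    if batch_d:
        yield np.array(batch_d), np.array(batch_l)

# 20 dummy samples with BATCHSIZE=8 -> batch sizes 8, 8, 4
datas = np.zeros((20, 1, 250, 3), dtype='float32')
labels = np.zeros((20, 1), dtype='int64')
sizes = [d.shape[0] for d, _ in batch_generator(datas, labels)]
print(sizes)  # [8, 8, 4]
```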
   

③ Defining the Networks

1. Network definitions

1.1 Feature-extraction network

In [ ]
class FeatureNet(paddle.nn.Layer):
    def __init__(self, name_scope='FeatureNet_', num_point=256):
        super(FeatureNet, self).__init__()
        self.input_transform_net = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 256, (1, 1)),
            BatchNorm(256),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.input_fc = Sequential(
            Linear(256, 64),
            ReLU(),
            Linear(64, 9, 
                weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((64, 9)))),
                bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
            )
        )
        self.mlp_1 = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 16, (1, 1)),
            BatchNorm(16),
            ReLU(),
        )
        self.feature_transform_net = Sequential(
            Conv2D(16, 16, (1, 1)),
            BatchNorm(16),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.feature_fc = Sequential(
            Linear(16, 8),
            ReLU(),
            Linear(8, 16*16)
        )
        self.mlp_2 = Sequential(
            Conv2D(16, 8, (1, 1)),
            BatchNorm(8),
            ReLU()
        )    
    def forward(self, inputs):
        """
        input: [batchsize, 1, 250, 3]
        output: [batchsize, 250, 1]
        """
        batchsize = inputs.shape[0]

        t_net = self.input_transform_net(inputs)
        t_net = paddle.squeeze(t_net)
        t_net = self.input_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 3, 3])

        x = paddle.squeeze(inputs)
        x = paddle.matmul(x, t_net)
        x = paddle.unsqueeze(x, axis=1)
        x = self.mlp_1(x)

        t_net = self.feature_transform_net(x)
        t_net = paddle.squeeze(t_net)
        t_net = self.feature_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 16, 16])

        x = paddle.squeeze(x)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.matmul(x, t_net)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.unsqueeze(x, axis=-1)
        x = self.mlp_2(x)
        x = paddle.max(x, axis=2)
        return x
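One detail worth calling out: the final `Linear(64, 9)` of the input transform is initialized with zero weights and an identity bias — the standard PointNet T-Net trick — so the predicted 3×3 transform starts out as the identity and leaves the points unchanged before training. A numpy sketch of that initialization:

```python
import numpy as np

features = np.random.rand(8, 64)      # pooled global features, batchsize 8
W = np.zeros((64, 9))                 # weight_attr: Assign(zeros((64, 9)))
b = np.eye(3).reshape(-1)             # bias_attr: Assign(reshape(eye(3), [-1]))
t_net = (features @ W + b).reshape(8, 3, 3)

points = np.random.rand(8, 250, 3)
transformed = points @ t_net          # batched matmul, as in forward()
print(np.allclose(transformed, points))  # True
```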
   

1.2 Point-cloud generation network

In [ ]
class UFeatureNet(paddle.nn.Layer):
    def __init__(self, name_scope='UFeatureNet_', num_point=1024):
        super(UFeatureNet, self).__init__()
        self.stage_1 = Sequential(
            Conv2DTranspose(1, 4, (1, 3)),
            BatchNorm(4),
            LeakyReLU(),
            Conv2D(4, 16, (1, 1)),
            BatchNorm(16),
            LeakyReLU()
        )
        self.stage_2 = Sequential(
            Conv2DTranspose(16, 32, (4, 4), (2, 1)),
            BatchNorm(32),
            LeakyReLU(),
            Conv2D(32, 128, (1, 1)),
            BatchNorm(128),
            LeakyReLU()
        )
        self.stage_3 = Sequential(
            Conv2DTranspose(128, 128, (4, 4), (2, 1)),
            BatchNorm(128),
            LeakyReLU(),
            Conv2D(128, 64, (1, 3)),
            BatchNorm(64),
            LeakyReLU()
        )
        self.stage_4 = Sequential(
            Conv2DTranspose(64, 32, (4, 1), (1, 1)),
            BatchNorm(32),
            LeakyReLU(),
            Conv2D(32, 32, (1, 3)),
            BatchNorm(32),
            LeakyReLU()
        )
        self.stage_5 = Sequential(
            Conv2DTranspose(32, 16, (2, 1), (1, 1)),
            BatchNorm(16),
            LeakyReLU(),
            Conv2D(16, 1, (1, 3)),
            Tanh()
        )

    def forward(self, inputs):
        """
        input: [batchsize, 1, 60, 1]
        output: [batchsize, 1, 250, 3]
        """
        x = self.stage_1(inputs)
        x = self.stage_2(x)
        x = self.stage_3(x)
        x = self.stage_4(x)
        x = self.stage_5(x)
        return x
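The spatial sizes inside the generator follow the unpadded transposed-convolution formula out = (in − 1) · stride + kernel. A quick sanity check of two stages (the values match the model summary further down):

```python
# Output size of an unpadded transposed convolution.
def deconv_out(n, kernel, stride=1):
    return (n - 1) * stride + kernel

# stage_2: Conv2DTranspose(16, 32, (4, 4), (2, 1)) on a (60, 3) feature map
h2 = deconv_out(60, 4, 2)   # 122
w2 = deconv_out(3, 4, 1)    # 6

# stage_5: Conv2DTranspose(32, 16, (2, 1)) lifts 249 rows to the final 250 points
h5 = deconv_out(249, 2, 1)  # 250
print(h2, w2, h5)  # 122 6 250
```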
   

1.3 Discriminator (D) network

In [ ]
class D(paddle.nn.Layer):
    def __init__(self, name_scope='D_'):
        super(D, self).__init__()
        self.feature_net = FeatureNet()
        self.fc = Sequential(
            Linear(8, 8),
            ReLU(),
            Dropout(p=0.7),
            Linear(8, 1)
        )

    def forward(self, inputs):
        """
        input: [batchsize, 1, 250, 3]
        output: [batchsize, 1]
        """
        x = self.feature_net(inputs)
        x = paddle.squeeze(x)
        x = self.fc(x)
        return x
   

1.4 Generator (G) network

In [ ]
class G(paddle.nn.Layer):
    def __init__(self, name_scope='G_'):
        super(G, self).__init__()
        self.u_feature_net = UFeatureNet()

    def forward(self, inputs):
        """
        input: [batchsize, 1, 60, 1]
        output: [batchsize, 1, 250, 3]
        """
        x = self.u_feature_net(inputs)
        return x
   

2. Visualizing the model structure

In [ ]
Discriminator = D()
paddle.summary(Discriminator, (8, 1, 250, 3))
       
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Conv2D-1       [[8, 1, 250, 3]]     [8, 64, 250, 1]          256      
  BatchNorm-1    [[8, 64, 250, 1]]     [8, 64, 250, 1]          256      
    ReLU-1       [[8, 64, 250, 1]]     [8, 64, 250, 1]           0       
   Conv2D-2      [[8, 64, 250, 1]]     [8, 256, 250, 1]       16,640     
  BatchNorm-2    [[8, 256, 250, 1]]    [8, 256, 250, 1]        1,024     
    ReLU-2       [[8, 256, 250, 1]]    [8, 256, 250, 1]          0       
  MaxPool2D-1    [[8, 256, 250, 1]]     [8, 256, 1, 1]           0       
   Linear-1          [[8, 256]]            [8, 64]            16,448     
    ReLU-3           [[8, 64]]             [8, 64]               0       
   Linear-2          [[8, 64]]              [8, 9]              585      
   Conv2D-3       [[8, 1, 250, 3]]     [8, 64, 250, 1]          256      
  BatchNorm-3    [[8, 64, 250, 1]]     [8, 64, 250, 1]          256      
    ReLU-4       [[8, 64, 250, 1]]     [8, 64, 250, 1]           0       
   Conv2D-4      [[8, 64, 250, 1]]     [8, 16, 250, 1]         1,040     
  BatchNorm-4    [[8, 16, 250, 1]]     [8, 16, 250, 1]          64       
    ReLU-5       [[8, 16, 250, 1]]     [8, 16, 250, 1]           0       
   Conv2D-5      [[8, 16, 250, 1]]     [8, 16, 250, 1]          272      
  BatchNorm-5    [[8, 16, 250, 1]]     [8, 16, 250, 1]          64       
    ReLU-6       [[8, 16, 250, 1]]     [8, 16, 250, 1]           0       
  MaxPool2D-2    [[8, 16, 250, 1]]      [8, 16, 1, 1]            0       
   Linear-3          [[8, 16]]              [8, 8]              136      
    ReLU-7            [[8, 8]]              [8, 8]               0       
   Linear-4           [[8, 8]]             [8, 256]            2,304     
   Conv2D-6      [[8, 16, 250, 1]]      [8, 8, 250, 1]          136      
  BatchNorm-6     [[8, 8, 250, 1]]      [8, 8, 250, 1]          32       
    ReLU-8        [[8, 8, 250, 1]]      [8, 8, 250, 1]           0       
 FeatureNet-1     [[8, 1, 250, 3]]        [8, 8, 1]              0       
   Linear-5           [[8, 8]]              [8, 8]              72       
    ReLU-9            [[8, 8]]              [8, 8]               0       
   Dropout-1          [[8, 8]]              [8, 8]               0       
   Linear-6           [[8, 8]]              [8, 1]               9       
===========================================================================
Total params: 39,850
Trainable params: 38,154
Non-trainable params: 1,696
---------------------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 19.45
Params size (MB): 0.15
Estimated Total Size (MB): 19.63
---------------------------------------------------------------------------
       
{'total_params': 39850, 'trainable_params': 38154}
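As a cross-check on the table above, a `Conv2D` layer holds in_channels · out_channels · kh · kw weights plus out_channels biases:

```python
# Conv2D parameter count: cin * cout * kh * kw weights + cout biases
def conv2d_params(cin, cout, kh, kw):
    return cin * cout * kh * kw + cout

p1 = conv2d_params(1, 64, 1, 3)     # Conv2D-1 in the summary: 256
p2 = conv2d_params(64, 256, 1, 1)   # Conv2D-2 in the summary: 16640
print(p1, p2)  # 256 16640
```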
In [ ]
Generator = G()
paddle.summary(Generator, (8, 1, 60, 1))
       
-----------------------------------------------------------------------------
  Layer (type)        Input Shape          Output Shape         Param #    
=============================================================================
Conv2DTranspose-1   [[8, 1, 60, 1]]       [8, 4, 60, 3]           16       
   BatchNorm-7      [[8, 4, 60, 3]]       [8, 4, 60, 3]           16       
   LeakyReLU-1      [[8, 4, 60, 3]]       [8, 4, 60, 3]            0       
    Conv2D-7        [[8, 4, 60, 3]]       [8, 16, 60, 3]          80       
   BatchNorm-8      [[8, 16, 60, 3]]      [8, 16, 60, 3]          64       
   LeakyReLU-2      [[8, 16, 60, 3]]      [8, 16, 60, 3]           0       
Conv2DTranspose-2   [[8, 16, 60, 3]]     [8, 32, 122, 6]         8,224     
   BatchNorm-9     [[8, 32, 122, 6]]     [8, 32, 122, 6]          128      
   LeakyReLU-3     [[8, 32, 122, 6]]     [8, 32, 122, 6]           0       
    Conv2D-8       [[8, 32, 122, 6]]     [8, 128, 122, 6]        4,224     
  BatchNorm-10     [[8, 128, 122, 6]]    [8, 128, 122, 6]         512      
   LeakyReLU-4     [[8, 128, 122, 6]]    [8, 128, 122, 6]          0       
Conv2DTranspose-3  [[8, 128, 122, 6]]    [8, 128, 246, 9]       262,272    
  BatchNorm-11     [[8, 128, 246, 9]]    [8, 128, 246, 9]         512      
   LeakyReLU-5     [[8, 128, 246, 9]]    [8, 128, 246, 9]          0       
    Conv2D-9       [[8, 128, 246, 9]]    [8, 64, 246, 7]        24,640     
  BatchNorm-12     [[8, 64, 246, 7]]     [8, 64, 246, 7]          256      
   LeakyReLU-6     [[8, 64, 246, 7]]     [8, 64, 246, 7]           0       
Conv2DTranspose-4  [[8, 64, 246, 7]]     [8, 32, 249, 7]         8,224     
  BatchNorm-13     [[8, 32, 249, 7]]     [8, 32, 249, 7]          128      
   LeakyReLU-7     [[8, 32, 249, 7]]     [8, 32, 249, 7]           0       
    Conv2D-10      [[8, 32, 249, 7]]     [8, 32, 249, 5]         3,104     
  BatchNorm-14     [[8, 32, 249, 5]]     [8, 32, 249, 5]          128      
   LeakyReLU-8     [[8, 32, 249, 5]]     [8, 32, 249, 5]           0       
Conv2DTranspose-5  [[8, 32, 249, 5]]     [8, 16, 250, 5]         1,040     
  BatchNorm-15     [[8, 16, 250, 5]]     [8, 16, 250, 5]          64       
   LeakyReLU-9     [[8, 16, 250, 5]]     [8, 16, 250, 5]           0       
    Conv2D-11      [[8, 16, 250, 5]]      [8, 1, 250, 3]          49       
     Tanh-1         [[8, 1, 250, 3]]      [8, 1, 250, 3]           0       
  UFeatureNet-1     [[8, 1, 60, 1]]       [8, 1, 250, 3]           0       
=============================================================================
Total params: 313,681
Trainable params: 311,873
Non-trainable params: 1,808
-----------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 115.48
Params size (MB): 1.20
Estimated Total Size (MB): 116.68
-----------------------------------------------------------------------------
       
{'total_params': 313681, 'trainable_params': 311873}
④ Gradient Penalty

In [ ]
def gradient_penalty(discriminator, real, fake, batchsize):
    # interpolate between real and fake samples
    t = paddle.uniform((batchsize, 1, 1, 1))
    t = paddle.expand_as(t, real)
    inter = t * real + (1 - t) * fake
    inter.stop_gradient = False
    inter_ = discriminator(inter)
    # create_graph=True so the penalty itself is differentiable w.r.t. D's weights
    grads = paddle.grad(inter_, [inter], create_graph=True)[0]
    grads = paddle.reshape(grads, [batchsize, -1])
    epsilon = 1e-12
    # per-sample L2 norm of the gradient
    norm = paddle.sqrt(paddle.sum(paddle.square(grads), axis=1) + epsilon)
    gp = paddle.mean((norm - 1) ** 2) * 10

    return gp
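In symbols, `gradient_penalty` together with the training loop below implements the WGAN-GP critic objective with penalty weight λ = 10, where the penalty point x̂ is a random interpolation between a real and a generated sample:

```latex
\mathcal{L}_D = \mathbb{E}_{\tilde{x}\sim P_g}\big[D(\tilde{x})\big]
  - \mathbb{E}_{x\sim P_r}\big[D(x)\big]
  + \lambda\,\mathbb{E}_{\hat{x}}\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big],
\qquad \hat{x} = t\,x + (1-t)\,\tilde{x},\; t\sim U[0,1].
```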
   

⑤ Training

1. Visualizing the Generator's output

In [ ]
def draw(data):
    zdata = []
    xdata = []
    ydata = []
    for i in data[0][0]:
        xdata.append(i[0])
        ydata.append(i[1])
        zdata.append(i[2])
    xdata = np.array(xdata)
    ydata = np.array(ydata)
    zdata = np.array(zdata)

    ax = plt.axes(projection='3d')
    ax.scatter3D(xdata, ydata, zdata, c='r')
    plt.savefig('fake.png')
   

2. Training

In [13]
def train():
    train_loader = pointDataLoader(file_path='./dataset/train.txt', mode='train')
    Discriminator = D()
    Generator = G()

    Discriminator.train()
    Generator.train()

    optim1 = paddle.optimizer.Adam(parameters=Discriminator.parameters(), weight_decay=0.001, learning_rate=1e-5)
    optim2 = paddle.optimizer.Adam(parameters=Generator.parameters(), weight_decay=0.001, learning_rate=1e-5)

    epoch_num = 2000
    train_g = 2
    for epoch in range(epoch_num):
        for batch_id, data in enumerate(train_loader()):
            real = paddle.to_tensor(data[0])
            noise = paddle.uniform((real.shape[0], 1, 60, 1))
            # detach the fake sample so the D update does not backprop into G
            fake = Generator(noise).detach()

            fake_loss = paddle.mean(Discriminator(fake))
            real_loss = -1 * paddle.mean(Discriminator(real))
            gp = gradient_penalty(Discriminator, real, fake, real.shape[0])
            d_loss = fake_loss + real_loss + gp

            d_loss.backward()
            optim1.step()
            optim1.clear_grad()

            for _ in range(train_g):
                noise = paddle.uniform((real.shape[0], 1, 60, 1))
                fake = Generator(noise)
                g_loss = -1 * paddle.mean(Discriminator(fake))

                g_loss.backward()
                optim2.step()
                optim2.clear_grad()

            if batch_id % 32 == 0:
                print("epoch: {}, batch_id: {}, d_loss is: {}, g_loss is: {}".format(epoch, batch_id, d_loss.numpy(), g_loss.numpy()))

        if epoch % 2 == 0:
            noise = paddle.uniform((real.shape[0], 1, 60, 1))
            fake = Generator(noise)
            draw(fake.numpy())
        if epoch % 20 == 0:
            paddle.save(Discriminator.state_dict(), './model/D.pdparams')
            paddle.save(optim1.state_dict(), './model/D.pdopt')
            paddle.save(Generator.state_dict(), './model/G.pdparams')
            paddle.save(optim2.state_dict(), './model/G.pdopt')

if __name__ == '__main__':
    train()
   

That concludes this attempt at point cloud generation with WGAN-GP based on Paddle 2.0.





 2025-07-31
