1. Our project does face recognition: the user records a video of their face while reading digits aloud, and the video is uploaded for processing. Videos recorded on phones are very large, however — on Android a 3-second clip is around 5 MB — so I tried compressing them on the frontend with JavaScript. Video compression is normally done server-side with FFmpeg, but that reportedly puts a heavy load on the server. Compression in the browser is awkward, though not impossible; it just performs poorly. After trying several frontend compression libraries found on GitHub, I settled on one that is lightweight, easy to use, and well suited to H5: https://github.com/ffmpegjs/ffmpeg.js
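As a sanity check on those numbers: 5 MB for 3 seconds is roughly 14 Mbit/s, far more bitrate than a small face-reading clip needs. Two arithmetic helpers (the names are mine, not from any library) make the size budget explicit:

```javascript
// Estimate bitrate from file size, and file size from a target bitrate.
// Pure arithmetic; units are bytes, seconds, and bits per second.
function bitrateFromSize(bytes, seconds) {
  return (bytes * 8) / seconds; // bits per second
}

function sizeForBitrate(bps, seconds) {
  return (bps * seconds) / 8; // bytes
}

// A 5 MB, 3 s Android recording:
const recorded = bitrateFromSize(5 * 1024 * 1024, 3);
console.log((recorded / 1e6).toFixed(1) + ' Mbit/s'); // ~14.0 Mbit/s

// At 1 Mbit/s (plenty for a 320x180 face video), 3 s is only:
console.log(sizeForBitrate(1e6, 3) + ' bytes'); // 375000 bytes
```

This is why transcoding helps: the recorder's default bitrate, not the content, is what makes the files big.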

2. Usage is already documented on GitHub. Below is a backup of the HTML example from the repository's examples directory that I referenced this time.

<html>
  <head>
    <script src="/dist/ffmpeg.dev.js"></script>
    <style>
      html, body {
        margin: 0;
        width: 100%;
        height: 100%
      }
      body {
        display: flex;
        flex-direction: column;
        align-items: center;
      }
    </style>
  </head>
  <body>
    <h3>Record video from webcam and transcode to mp4 (x264) and play!</h3>
    <div>
      <video id="webcam" width="320px" height="180px"></video>
      <video id="output-video" width="320px" height="180px" controls></video>
    </div>
    <button id="record" disabled>Start Recording</button>
    <p id="message"></p>
    <script>
      const { createWorker } = FFmpeg;
      const worker = createWorker({
        corePath: '../../node_modules/@ffmpeg/core/ffmpeg-core.js',
        logger: ({ message }) => console.log(message),
      });


      const webcam = document.getElementById('webcam');
      const recordBtn = document.getElementById('record');
      const startRecording = () => {
        const rec = new MediaRecorder(webcam.srcObject);
        const chunks = [];
        
        recordBtn.textContent = 'Stop Recording';
        recordBtn.onclick = () => {
          rec.stop();
          recordBtn.textContent = 'Start Recording';
          recordBtn.onclick = startRecording;
        }


        rec.ondataavailable = e => chunks.push(e.data);
        rec.onstop = async () => {
          transcode(new Uint8Array(await (new Blob(chunks)).arrayBuffer()));
        };
        rec.start();
      };


      (async () => {
        webcam.srcObject = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
        await webcam.play();
        recordBtn.disabled = false;
        recordBtn.onclick = startRecording;
      })();


      const transcode = async (webcamData) => {
        const message = document.getElementById('message');
        const name = 'record.webm';
        message.innerHTML = 'Loading ffmpeg-core.js';
        await worker.load();
        message.innerHTML = 'Start transcoding';
        await worker.write(name, webcamData);
        await worker.transcode(name,  'output.mp4');
        message.innerHTML = 'Complete transcoding';
        const { data } = await worker.read('output.mp4');


        const video = document.getElementById('output-video');
        video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
      }
    </script>
  </body>
</html>
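The example transcodes with default settings, which keeps the source resolution. For a smaller file you would pass extra FFmpeg flags; how those flags reach ffmpeg.js differs between versions (some builds expose a generic run()), so take this only as a sketch of the flags themselves — the helper name is mine:

```javascript
// Build an FFmpeg argument list that downscales and recompresses a clip.
// crf: x264 quality (higher = smaller file); width: output width, with
// -2 keeping the aspect ratio and an even height. These are standard
// FFmpeg options; how you hand them to ffmpeg.js depends on the version.
function buildShrinkArgs(input, output, { crf = 28, width = 640 } = {}) {
  return [
    '-i', input,
    '-vf', `scale=${width}:-2`,
    '-c:v', 'libx264',
    '-crf', String(crf),
    '-preset', 'veryfast',
    output,
  ];
}

console.log(buildShrinkArgs('record.webm', 'output.mp4').join(' '));
// -i record.webm -vf scale=640:-2 -c:v libx264 -crf 28 -preset veryfast output.mp4
```

The veryfast preset trades some compression efficiency for encode speed, which matters a lot for in-browser transcoding.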

3. My summary. There are two ways to include this frontend ffmpeg.js. If you are writing plain H5 with script tags, you can include it directly:

<script src="https://unpkg.com/@ffmpeg/ffmpeg@0.6.1/dist/ffmpeg.min.js"></script>
const { createWorker } = FFmpeg;

If you are using modules instead, install it via npm:

npm install @ffmpeg/ffmpeg

const { createWorker } = require('@ffmpeg/ffmpeg');
const worker = createWorker();

4. In my project, my first attempt at frontend compression borrowed the code above, using only the transcode part: call uni.chooseVideo to open the local camera, read the recorded video as WebM, and transcode it to MP4 to achieve the compression.

const { createWorker } = FFmpeg;
const worker = createWorker();
uni.chooseVideo({
  sourceType: ['camera'],
  camera: 'front',
  async success(chooseRes) { // must be async so we can await inside
    // chooseRes.tempFilePath is a blob URL for the locally recorded video
    await worker.load();
    const name = 'record.webm';
    await worker.write(name, chooseRes.tempFilePath);
    await worker.transcode(name, 'output.mp4');
    const { data } = await worker.read('output.mp4');
    const src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
    self.uploadVideo(chooseRes.tempFilePath);
  }
})
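One caveat in the snippet above: worker.write expects file data, and whether it accepts a blob URL like tempFilePath directly depends on the ffmpeg.js version. When it does not, fetching the URL yourself and writing the raw bytes should work; a sketch (the function name is mine):

```javascript
// Fetch a (blob:) URL and return its contents as a Uint8Array,
// which worker.write() can always accept.
async function urlToUint8Array(url) {
  const res = await fetch(url);
  const buf = await res.arrayBuffer();
  return new Uint8Array(buf);
}

// Usage inside the success callback:
// const bytes = await urlToUint8Array(chooseRes.tempFilePath);
// await worker.write('record.webm', bytes);
```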

5. This approach works, but it introduces two new problems. (1) It is very slow: compressing a 4-second video takes almost 40 seconds on a phone, which is far too long, and videos recorded in the WeChat browser are so large they cannot be transcoded at all — the performance of this method is clearly poor. (2) uni-app's uni.chooseVideo does not support selecting the front camera on H5, and we needed the front camera, so this method could not use it. I therefore went back to the HTML example above and adapted it. Testing showed that if you read the front camera stream and render it in a video element, the resulting recording is already small and needs no compression; this avoids the huge local recordings and also gives access to the H5 front camera. I wrote my own camera recording component in uni-app's HBuilder; the code is below.

<template>
    <view class="pageContent">
        <view>
            <video id="webcam" :class="cameraVisible?'recordSrc':'hiddenClass'" muted :controls="false"></video>
        </view>
          <video class="recordSrc playvideo" controls :src="recordBlob" :class="cameraVisible?'hiddenClass':'recordSrc'"></video>
            <button class="button" @click="startRecording" v-if="step==0">Start Recording</button>
            <button class="button" @click="stopRecording" v-if="step == 1">Stop Recording</button>
            <button class="button restart" @click="restart" v-if="step==2">Re-record</button>
            <button class="button upload" @click="uploadVideo" v-if="step==2">Upload Video</button>
    </view>
</template>


<script>
import {uploadVideo} from '../../api/global.js'
import {doVideo} from '../../api/smz.js'
    export default {
        data() {
            return {
                 message:"",
                 mediaObject:'',
                 rec:'',
                 chunks:[],
                 recordBlob:'',
                 step: "0", // 0: ready to record, 1: recording, 2: stopped
                 cameraVisible: true, // show / hide the camera preview
            }
        },
        onLoad() {
            },
        onReady() {
            this.init();
            
         },
        components: {


        },
        methods: {
            async init(){
                    this.videoContext = uni.createVideoContext('webcam');
                    // uni-app wraps <video>; grab the real <video> element it renders
                    const dom = document.getElementsByClassName("uni-video-video")[0];
                    try {
                        dom.srcObject = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
                        console.log('getUserMedia completed successfully.');
                    } catch (error) {
                        console.log(error.name + ": " + error.message);
                        alert(error.name + ": " + error.message);
                    }
                    this.mediaObject = dom.srcObject;
                    await this.videoContext.play();
                },
                    
            startRecording (){
              this.step = "1";
              this.rec = new MediaRecorder(this.mediaObject);
              this.chunks = [];
              this.rec.start();
            },
            stopRecording(){
                this.step = "2";
                this.cameraVisible = false;
                // attach the handlers before calling stop() so no chunk is missed
                this.rec.ondataavailable = e => this.chunks.push(e.data);
                this.rec.onstop = () => {
                    this.recordBlob = URL.createObjectURL(new Blob(this.chunks, { type: 'video/mp4' }));
                    console.log(this.recordBlob);
                };
                this.rec.stop();
            },
            async restart(){
                this.step = "0";
                this.cameraVisible = true;
                console.log(this.mediaObject)
                // await this.videoContext.play();
            },
            uploadVideo(){
                // stop all tracks to release the camera and microphone
                if (this.mediaObject) {
                    this.mediaObject.getTracks().forEach(track => track.stop());
                }
                const self = this;
                console.log(self.recordBlob);
                // proceed with the next step (upload, etc.) ...
            },
        
          
    }
    }
        
</script>
   
<style scoped>
    uni-page-body{
        height: 100%
    }
    uni-view{
        display:contents;
    }
     html, body {
        margin: 0;
        width: 100%;
        height: 100%
      }
      body {
        display: flex;
        flex-direction: column;
        align-items: center;
      }
      .recordSrc {
          width:100%;
          height:100%;
          position:absolute;
      }
      .playvideo{
          left: 0;
      }
      .button {
              position: absolute;
              bottom: 11%;
              left: 50%;
              margin-left: -50px;
              width: 100px;
              border-radius: 42px;
              background-color: red;
       }
      .restart{
              left:25%;
      }
      .upload{
              left:75%;
      }
      .hiddenClass {
          visibility: hidden;
      }
      
      
</style>

This approach works in Chrome on my Android phone and in the WeChat browser, but Huawei's stock browser does not support it, and it does not work on iOS. I am still exploring how to make it compatible with iOS; suggestions from anyone with a good solution are welcome.
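The iOS failures are most likely a MediaRecorder issue: older iOS Safari does not implement it at all, and browsers that do disagree on which container/codec they can record (Safari records MP4, not WebM). A feature-detection sketch, assuming only the standard MediaRecorder API; the function name is mine:

```javascript
// Return the first recording MIME type the given window's browser supports,
// or null if MediaRecorder is unavailable (e.g. older iOS Safari).
function pickRecordingMimeType(win) {
  const MR = win.MediaRecorder;
  if (!MR || typeof MR.isTypeSupported !== 'function') return null;
  const candidates = [
    'video/webm;codecs=vp9',
    'video/webm;codecs=vp8',
    'video/webm',
    'video/mp4', // Safari records MP4, not WebM
  ];
  return candidates.find(t => MR.isTypeSupported(t)) || null;
}

// Usage:
// const type = pickRecordingMimeType(window);
// if (!type) { /* fall back, e.g. <input type="file" accept="video/*" capture="user"> */ }
// else this.rec = new MediaRecorder(this.mediaObject, { mimeType: type });
```

When null comes back, a file input with the capture attribute is a reasonable fallback, since it hands recording off to the native camera app.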
