System requirements
Linux/Unix
OpenGL 3.3 compatible GPU
X window system
Launching it on an unsupported system (here glxinfo reports only OpenGL 2.1, with a core profile of 0.0):

$ glxinfo | grep 'version'
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
Max core profile version: 0.0
Max compat profile version: 2.1
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 2.0
OpenGL version string: 2.1 Mesa 20.2.6
OpenGL shading language version string: 1.20
OpenGL ES profile version string: OpenGL ES 2.0 Mesa 20.2.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 1.0.16

produces a message like the following, and it fails to start:

info: Initializing OpenGL...
error: X Error: GLXBadFBConfig, Major opcode: 152, Minor opcode: 34, Serial: 29
error: Failed to create OpenGL context.
error: Failed to create context!
error: device_create (GL) failed
error: Failed to initialize video. Your GPU may not be supported, or your graphics drivers may need to be updated.
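The number to compare against the 3.3 requirement is the "Max core profile version" line (0.0 above, with plain OpenGL only at 2.1). A rough sketch of that check; meets_gl_requirement is my own helper name, and a plain numeric compare is a simplification of real version strings:

```shell
# Rough sketch: does a reported OpenGL version satisfy the 3.3 requirement?
# A plain numeric compare is a simplification (e.g. "3.10" would misparse);
# the helper name and sample version are hypothetical.
meets_gl_requirement() {
  awk -v v="$1" 'BEGIN { exit !(v + 0 >= 3.3) }'
}

meets_gl_requirement "2.1" && echo "supported" || echo "unsupported"
# -> unsupported, matching the glxinfo output shown above
```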
The virtual camera feature is built in as standard since version 26 on Windows, and since 26.1 on Linux and macOS.
On Linux it uses v4l2loopback: if the module is already loaded it uses it as-is, and if not, it authenticates via polkit and loads it.
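Whether the module is already present can be checked before launching OBS; a minimal sketch:

```shell
# Minimal check: /sys/module/<name> exists only while a kernel module
# is loaded, so this distinguishes the two cases OBS handles.
if [ -d /sys/module/v4l2loopback ]; then
  echo "v4l2loopback: already loaded"
else
  echo "v4l2loopback: not loaded (OBS would load it via polkit)"
fi
```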
After installing this plugin, select an output device under "Tools" → "V4L2 Video Output" to share video.
The v4l2loopback module must be loaded beforehand with exclusive_caps=1.
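One way to make that option persistent is a modprobe.d fragment; the filename below is an example, not something the plugin installs for you:

```
# /etc/modprobe.d/v4l2loopback.conf (example filename)
# exclusive_caps=1 is required by the plugin
options v4l2loopback exclusive_caps=1
```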
OBS Studio's built-in virtual camera doesn't seem to let you choose the output device, so this plugin still has that advantage.
Lets you use a smartphone (Android/iOS) as a webcam for a PC (Windows/Linux).
It used to require a dedicated kernel module, but since v1.6 it also works with the standard v4l2loopback module:
Added support for the regular v4l2loopback module ( Original pr #105 )
https://github.com/dev47apps/droidcam/releases/tag/v1.6
It can also be used as an IP camera.
Autofocus, zoom, and LED on/off can be controlled remotely.
Besides v4l2loopback_dc, the generic v4l2loopback module is also usable since v1.6. v4l2loopback_dc is installed with install-dkms, while v4l2loopback can be installed from the distribution packages.
v4l2loopback_dc
$ cd droidcam
$ sudo ./install-dkms 1920 1080
The following files are created:
/etc/modules-load.d/droidcam.conf
Module auto-load:
videodev
v4l2loopback_dc
/etc/modprobe.d/droidcam.conf
Module options:
options v4l2loopback_dc width=1920 height=1080 video_nr=9 card_label="DroidCam"
video_nr sets the device number. Keep this number larger than the one used by OBS Studio 26.1's virtual camera (v4l2loopback), because OBS Studio grabs the lowest-numbered device among both v4l2loopback_dc and v4l2loopback.
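Under that constraint, the two conf files might look like this; the numbers follow the examples in this article, and the second label is my own placeholder:

```
# /etc/modprobe.d/droidcam.conf -- DroidCam on a high device number
options v4l2loopback_dc width=1920 height=1080 video_nr=9 card_label="DroidCam"

# /etc/modprobe.d/v4l2loopback.conf -- OBS's device on a lower number,
# so OBS grabs this one instead of DroidCam's
options v4l2loopback exclusive_caps=1 video_nr=2 card_label="Virtual Camera"
```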
Using v4l2loopback.
Installation
$ sudo apt install v4l2loopback-dkms v4l2loopback-utils
/etc/modules-load.d/v4l2loopback.conf
Module auto-load configuration:
v4l2loopback
/etc/modprobe.d/v4l2loopback.conf
Module options:
options v4l2loopback exclusive_caps=1 video_nr=6,7,8 card_label="Virtual Camera,v4l2loopback-7,v4l2loopback-8"
Both v4l2loopback and v4l2loopback_dc work, but since the device cannot be selected, it may be better to go with v4l2loopback_dc unless that is too much hassle.
The snd-aloop module is required. The install script sets up automatic loading of snd-aloop:
$ ./install-sound
Or load it manually:
$ sudo modprobe -v snd-aloop
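install-sound takes care of this, but the equivalent manual setup is just a modules-load.d fragment (the filename here is hypothetical):

```
# /etc/modules-load.d/snd-aloop.conf (hypothetical filename)
# Load the ALSA loopback module at boot
snd-aloop
```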
With DroidCam running, run the following command to bind it to the snd-aloop device:
$ pacmd load-module module-alsa-source device=hw:Loopback,1,1 source_properties=device.description=droidcam_audio
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms
$ sudo modprobe -v v4l2loopback
insmod /lib/modules/5.4.0-4-amd64/updates/dkms/v4l2loopback.ko
Multiple devices can be created by specifying devices=n:
$ ls -la /dev/video*
crw-rw----+ 1 root video 81, 0 Mar 21 17:31 /dev/video0
crw-rw----+ 1 root video 81, 1 Mar 21 17:31 /dev/video1
$ sudo modprobe -v v4l2loopback devices=3
insmod /lib/modules/5.4.0-4-amd64/updates/dkms/v4l2loopback.ko devices=3
$ ls -la /dev/video*
crw-rw----+ 1 root video 81, 0 Mar 21 17:31 /dev/video0
crw-rw----+ 1 root video 81, 1 Mar 21 17:31 /dev/video1
crw-rw----+ 1 root video 81, 2 Mar 22 22:31 /dev/video2
crw-rw----+ 1 root video 81, 3 Mar 22 22:31 /dev/video3
crw-rw----+ 1 root video 81, 4 Mar 22 22:31 /dev/video4
Devices with specific numbers can be created with video_nr=n,n,n...:
$ ls -la /dev/video*
crw-rw----+ 1 root video 81, 0 Mar 21 17:31 /dev/video0
crw-rw----+ 1 root video 81, 1 Mar 21 17:31 /dev/video1
$ sudo modprobe -v v4l2loopback video_nr=3,5,7
insmod /lib/modules/5.4.0-4-amd64/updates/dkms/v4l2loopback.ko video_nr=3,5,7
$ ls -l /dev/video*
crw-rw----+ 1 root video 81, 0 Mar 21 17:31 /dev/video0
crw-rw----+ 1 root video 81, 1 Mar 21 17:31 /dev/video1
crw-rw----+ 1 root video 81, 2 Mar 22 22:33 /dev/video3
crw-rw----+ 1 root video 81, 3 Mar 22 22:33 /dev/video5
crw-rw----+ 1 root video 81, 4 Mar 22 22:33 /dev/video7
A name can be assigned with card_label="hoge":
$ sudo modprobe -v v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
insmod /lib/modules/5.4.0-4-amd64/updates/dkms/v4l2loopback.ko video_nr=3,4,7 card_label="device number 3,the number four,the last one"
$ v4l2-ctl --list-devices
device number 3 (platform:v4l2loopback-000):
	/dev/video3
the number four (platform:v4l2loopback-001):
	/dev/video4
the last one (platform:v4l2loopback-002):
	/dev/video7
Integrated Camera: Integrated C (usb-0000:00:1a.0-1.6):
	/dev/video0
	/dev/video1
	/dev/media0
With v4l2loopback 0.12.5-1, when configuring multiple cameras in /etc/modprobe.d/v4l2loopback.conf and specifying card_label, quoting each label separately garbles the names. Quoting the whole list once works:
NG : card_label="cam 1","cam 2"
OK : card_label="cam 1,cam 2"
$ uname -vm
#1 SMP Debian 5.10.4-1 (2020-12-31) x86_64
$ dpkg-query -W v4l2loopback-*
v4l2loopback-dkms	0.12.5-1
v4l2loopback-modules
v4l2loopback-utils	0.12.5-1
Removing the module:
$ sudo modprobe -rv v4l2loopback
rmmod v4l2loopback
To increase the number of devices, it seems you have to rmmod the module and insmod it again.
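That reload can be wrapped in a small helper; this sketch only echoes the commands, since they need root, and the function name is mine:

```shell
# Dry-run sketch: to change the number of loopback devices the module
# must be removed and re-inserted. This prints the commands it would run;
# remove the echo to actually execute them (root required).
reload_v4l2loopback() {
  devices="$1"
  echo "sudo modprobe -r v4l2loopback"
  echo "sudo modprobe v4l2loopback devices=${devices} exclusive_caps=1"
}

reload_v4l2loopback 3
```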
Other parameters:
$ /sbin/modinfo v4l2loopback -p
debug:debugging level (higher values == more verbose) (int)
max_buffers:how many buffers should be allocated (int)
max_openers:how many users can open loopback device (int)
devices:how many devices should be created (int)
video_nr:video device numbers (-1=auto, 0=/dev/video0, etc.) (array of int)
card_label:card labels for every device (array of charp)
exclusive_caps:whether to announce OUTPUT/CAPTURE capabilities exclusively or not (array of bool)
max_width:maximum frame width (int)
max_height:maximum frame height (int)
Create /dev/video9 and feed a video into it with ffmpeg:

$ sudo modprobe -v v4l2loopback video_nr=9
$ ffmpeg -re -i sourcevideo.mp4 -f v4l2 /dev/video9
Looking at /dev/video9 now shows that video:
$ ffplay /dev/video9
$ ffmpeg -f x11grab -s 1920x1080 -i $DISPLAY+0,0 -vf format=pix_fmts=yuv420p -f v4l2 /dev/video9
Handy for feeding video from OBS and the like:
$ sudo modprobe -v v4l2loopback devices=7 exclusive_caps=1,1,1,1,1,1,1,1
$ ffmpeg -f flv -listen 1 -i rtmp://localhost:1935/live/test -f v4l2 -vcodec rawvideo /dev/video7
Reducing latency:
$ ffmpeg -an -probesize 32 -analyzeduration 0 -listen 1 -i rtmp://127.0.0.1:1935/live -f v4l2 -vcodec rawvideo /dev/video7
An AppImage is also available, so it's easy to try. Lots of filters. Quite heavy on my machine, though.
Windows only. It used to work under Wine, but no longer does. A Unity version is under development, with which they reportedly plan to go multi-platform.
Successor to FaceRig. Windows only.
Lets you impersonate the person in an arbitrary photo. For now it eats a lot of GPU power.
High-quality real-time virtual backgrounds:
PeterL1n/BackgroundMattingV2: Real-Time High-Resolution Background Matting
andreyryabtsev/BGMv2-webcam-plugin-linux: A Linux webcam plugin for BGMv2 as used in our demos.
$ pip install --user webcam-filters
$ webcam-filters --help
Usage: webcam-filters [OPTIONS]

  Add video filters to your webcam.

Options:
  --input-dev TEXT                 Input device (e.g. real webcam at /dev/video0).  [required]
  --input-width INTEGER            Preferred width.  [default: 1280]
  --input-height INTEGER           Preferred height.  [default: 720]
  --input-framerate FRACTION       Preferred framerate specified in fractional format (e.g. '30/1').  [default: 30]
  --input-media-type TEXT          Input media type (e.g. 'image/jpeg' or 'video/x-raw'). If specified ONLY that type is allowed. Otherwise, everything is allowed. Use --list-dev-caps to see all available formats.
  --output-dev TEXT                Output device (e.g. virtual webcam at /dev/video3).  [required]
  --background-blur INTEGER RANGE  Background blur intensity.  [0<=x<=200]
  --selfie-segmentation-model [general|landscape]
                                   Mediapipe model used for selfie segmentation  [default: general]
  --selfie-segmentation-threshold FLOAT RANGE
                                   Selfie segmentation threshold.  [default: 0.5;0<=x<=1]
  --hw-accel-api [off|vaapi]       Hardware acceleration API to use.  [default: off]
  --verbose                        Enable verbose output.
  --list-dev-caps TEXT             List device capabilities (width, height, framerate, etc) and exit.
  --gst-plugin-path                Show Gstreamer plugin path and exit.
  --install-completion             Install auto completion for the current shell and exit.
  --show-completion                Show auto completion for the current shell and exit.
  --version                        Show version and exit.
  --help                           Show this message and exit.
With the source webcam at /dev/video0, the output device at /dev/video8, and blur set to 150 (range 0 to 200):
$ webcam-filters --input-dev /dev/video0 --output-dev /dev/video8 --background-blur 150
Selected input: media-type=image/jpeg, width=1280 height=720 framerate=30/1
(python3:489391): GStreamer-CRITICAL **: 02:11:49.532: The created element should be floating, this is probably caused by faulty bindings
(python3:489391): GStreamer-CRITICAL **: 02:11:49.533: The created element should be floating, this is probably caused by faulty bindings
(python3:489391): GStreamer-CRITICAL **: 02:11:49.534: The created element should be floating, this is probably caused by faulty bindings
Pipeline: READY
Pipeline: PAUSED
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Pipeline: RUNNING
Warning from /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: A lot of buffers are being dropped.
Check it with VLC or the like ($ cvlc v4l:///dev/video8).
It uses a fair amount of CPU:
$ top -b -n1 | head | tail -4
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 489391 matoken   20   0 1468140 303724  69208 S 229.4   3.8   9:09.51 webcam-filters
   3715 matoken   20   0   31104  28340   2920 S  11.8   0.4  54:37.91 tmux: server
 495018 matoken   20   0    2528   1896   1628 R  11.8   0.0   0:00.02 byobu-status
There is a hardware-acceleration option using VAAPI, so let's try it:
$ webcam-filters --input-dev /dev/video0 --output-dev /dev/video8 --background-blur 150 --hw-accel-api vaapi
Selected input: media-type=image/jpeg, width=1280 height=720 framerate=30/1
Traceback (most recent call last):
  File "/home/matoken/.local/bin/webcam-filters", line 8, in <module>
    sys.exit(main())
  File "/home/matoken/.local/lib/python3.9/site-packages/webcam_filters/__main__.py", line 5, in main
    cli.main(prog_name="webcam-filters")
  File "/home/matoken/.local/lib/python3.9/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/home/matoken/.local/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/matoken/.local/lib/python3.9/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/home/matoken/.local/lib/python3.9/site-packages/webcam_filters/main.py", line 154, in cli
    add_filters(
  File "/home/matoken/.local/lib/python3.9/site-packages/webcam_filters/gst.py", line 150, in add_filters
    rgbconvert = Gst.parse_bin_from_description(
gi.repository.GLib.Error: gst_parse_error: no element "vaapipostproc" (1)
Probably this hardware generation is too old for it to work. Version 0.4.0 adds finer-grained options, which may make it usable.
Try lowering the resolution. First, check which values are available:
$ webcam-filters --list-dev-caps /dev/video0
        '/dev/video0' Capabilites
┏━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━┓
┃ Media Type  ┃ Width ┃ Height ┃ Framerate ┃
┡━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━┩
│ video/x-raw │ 1280  │ 720    │ 9/1       │
│ video/x-raw │ 960   │ 540    │ 10/1      │
│ video/x-raw │ 848   │ 480    │ 10/1      │
│ video/x-raw │ 640   │ 480    │ 30/1      │
│ video/x-raw │ 640   │ 360    │ 30/1      │
│ video/x-raw │ 424   │ 240    │ 30/1      │
│ video/x-raw │ 352   │ 288    │ 30/1      │
│ video/x-raw │ 320   │ 240    │ 30/1      │
│ video/x-raw │ 320   │ 180    │ 30/1      │
│ image/jpeg  │ 1280  │ 720    │ 30/1      │
│ image/jpeg  │ 960   │ 540    │ 30/1      │
│ image/jpeg  │ 848   │ 480    │ 30/1      │
│ image/jpeg  │ 640   │ 480    │ 30/1      │
│ image/jpeg  │ 640   │ 360    │ 30/1      │
│ image/jpeg  │ 424   │ 240    │ 30/1      │
│ image/jpeg  │ 352   │ 288    │ 30/1      │
│ image/jpeg  │ 320   │ 240    │ 30/1      │
│ image/jpeg  │ 320   │ 180    │ 30/1      │
└─────────────┴───────┴────────┴───────────┘
It doesn't work; trying several resolutions fails as well:
$ webcam-filters --input-width 320 --input-height 180 --input-framerate 30 --input-dev /dev/video0 --output-dev /dev/video8 --background-blur 150
Selected input: media-type=video/x-raw, width=320 height=180 framerate=30/1
(python3:505620): GStreamer-CRITICAL **: 02:23:16.654: The created element should be floating, this is probably caused by faulty bindings
(python3:505620): GStreamer-CRITICAL **: 02:23:16.655: The created element should be floating, this is probably caused by faulty bindings
(python3:505620): GStreamer-CRITICAL **: 02:23:16.655: The created element should be floating, this is probably caused by faulty bindings
Pipeline: READY
Pipeline: PAUSED
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Error from /GstPipeline:pipeline0/GstV4l2Src:v4l2src1: Internal data stream error.
Changing --input-media-type to image/jpeg made it work. --input-framerate seems stuck at 30 no matter what value is given?
$ webcam-filters --input-media-type image/jpeg --input-width 320 --input-height 180 --input-framerate 30 --input-dev /dev/video0 --output-dev /dev/video8 --background-blur 150
Selected input: media-type=image/jpeg, width=320 height=180 framerate=30/1
(python3:507940): GStreamer-CRITICAL **: 02:24:55.761: The created element should be floating, this is probably caused by faulty bindings
(python3:507940): GStreamer-CRITICAL **: 02:24:55.763: The created element should be floating, this is probably caused by faulty bindings
(python3:507940): GStreamer-CRITICAL **: 02:24:55.764: The created element should be floating, this is probably caused by faulty bindings
Pipeline: READY
Pipeline: PAUSED
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Pipeline: RUNNING
Warning from /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: A lot of buffers are being dropped.
CPU usage went down, but it's still rough:
$ top -b -n1 | head | tail -4
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 509393 matoken   20   0 1226080 207328  65588 S 116.7   2.6   1:36.49 webcam-filters
 511339 root      20   0   18880  15760   5796 R  83.3   0.2   0:00.30 x2golistsession
   2717 matoken   20   0  925376  73276  38676 S  11.1   0.9  97:48.93 Xorg
The CPU used here:
$ cat /proc/cpuinfo | grep 'model name' -m1
model name	: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz