Installing OpenCV on a Raspberry Pi and Detecting Body Joints
https://www.raspberrypi.org/downloads/raspbian/
Installing Raspbian
- RASPBIAN STRETCH WITH DESKTOP AND RECOMMENDED SOFTWARE: includes LibreOffice, Scratch, Sonic Pi, Thonny, Mathematica, and so on
- RASPBIAN STRETCH WITH DESKTOP: includes the Chromium browser, VLC media player, Python, and so on
- RASPBIAN STRETCH LITE: a version without the desktop GUI
Click "Flash"!
The Raspberry Pi's default credentials are
id: pi
passwd: raspberry
If Wi-Fi does not work, try selecting GB, US, etc. under
Preferences > Raspberry Pi Configuration > Localisation.
sudo apt install fonts-nanum fonts-nanum-extra
sudo apt install nabi
sudo apt install im-config
If Korean text still does not display properly, refer to the link below.
vncserver -geometry 1280x1024
Making a CCTV
First, OpenCV has to be installed; the two links below cover the installation.
https://webnautes.tistory.com/916
https://www.alatortsev.com/2018/09/05/installing-opencv-3-4-3-on-raspberry-pi-3-b/
Below is the cmake command used in the build, copied here so it doesn't have to be typed out by hand.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=OFF \
-D WITH_IPP=OFF \
-D WITH_1394=OFF \
-D BUILD_WITH_DEBUG_INFO=OFF \
-D BUILD_DOCS=OFF \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D WITH_QT=OFF \
-D WITH_GTK=ON \
-D WITH_OPENGL=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.1.2/modules \
-D WITH_V4L=ON \
-D WITH_FFMPEG=ON \
-D WITH_XINE=ON \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON ../
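The cmake invocation above only configures the build. In both linked guides it is followed by compiling and installing; a sketch of those standard steps (the -j value and any swap-size tweaks depend on your Pi model):

```shell
# Standard OpenCV build/install steps after the cmake configuration above.
make -j4            # compile; this can take several hours on a Pi 3
sudo make install   # install the libraries and Python bindings
sudo ldconfig       # refresh the shared-library cache
# quick sanity check that the Python bindings are visible
python3 -c "import cv2; print(cv2.__version__)"
```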
https://www.youtube.com/watch?v=DIGwweDJCBk
https://www.youtube.com/watch?v=WgsZc_wS2qQ
pip3 install imutils
pip3 install imagezmq
client.py
import socket
import time
from imutils.video import VideoStream
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://<server internal IP>:5555')
rpi_name = socket.gethostname()  # send RPi hostname with each image
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:  # send images as stream until Ctrl-C
    image = picam.read()
    sender.send_image(rpi_name, image)
On the receiving PC, install the packages as well:
pip install imagezmq
conda install -c conda-forge imutils
Then check in a Python shell that "import imagezmq" succeeds.
server.py
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:
    rpi_name, image = image_hub.recv_image()
    cv2.imshow(rpi_name, image)
    if cv2.waitKey(1) == ord('q'):
        break
    image_hub.send_reply(b'OK')
If the CCTV feed comes through like the above, let's now save the images.
import cv2
import imagezmq
from time import gmtime, strftime

image_hub = imagezmq.ImageHub()
while True:
    rpi_name, image = image_hub.recv_image()
    image2 = cv2.resize(image, dsize=(640, 480), interpolation=cv2.INTER_AREA)
    cv2.imshow(rpi_name, image2)
    imgfile = 'D:/image/' + strftime("%Y%m%d_%H_%M_%S", gmtime()) + '.png'
    cv2.imwrite(imgfile, image2)
    if cv2.waitKey(1) == ord('q'):
        break
    image_hub.send_reply(b'OK')
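One caveat about the saving loop above: the timestamp only has one-second resolution, so frames that arrive within the same second overwrite each other. A small helper (hypothetical, not part of the original code) appends a serial number to keep the names unique:

```python
from time import gmtime, strftime
from itertools import count

_frame_counter = count()  # global serial number across all saved frames

def timestamped_path(directory):
    """Build a save path like the loop above, plus a serial number so
    multiple frames within the same second still get distinct names."""
    stamp = strftime("%Y%m%d_%H_%M_%S", gmtime())
    return '{}{}_{:04d}.png'.format(directory, stamp, next(_frame_counter))

print(timestamped_path('D:/image/'))  # e.g. 'D:/image/20200323_12_00_00_0000.png'
```

Swap this into the loop in place of the `imgfile = ...` line if you need every frame kept.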
Code that detects the joints in the saved images:
import os

import torch
import torchvision
from torchvision import models
import torchvision.transforms as T
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches

print('pytorch', torch.__version__)
print('torchvision', torchvision.__version__)

IMG_SIZE = 480
THRESHOLD = 0.95

model = models.detection.keypointrcnn_resnet50_fpn(pretrained=True).eval()

d_list = 'D:/image/'
for ls in os.listdir(d_list):
    print(ls)
    img = Image.open(d_list + ls)
    img = img.resize((IMG_SIZE, int(img.height * IMG_SIZE / img.width)))

    trf = T.Compose([
        T.ToTensor()
    ])
    input_img = trf(img)
    out = model([input_img])[0]

    codes = [Path.MOVETO, Path.LINETO, Path.LINETO]
    fig, ax = plt.subplots(1, figsize=(16, 16))
    ax.imshow(img)

    for box, score, keypoints in zip(out['boxes'], out['scores'], out['keypoints']):
        score = score.detach().numpy()
        if score < THRESHOLD:
            continue
        box = box.detach().numpy()
        keypoints = keypoints.detach().numpy()[:, :2]

        rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1],
                                 linewidth=2, edgecolor='b', facecolor='none')
        ax.add_patch(rect)

        # 17 keypoints
        for k in keypoints:
            circle = patches.Circle((k[0], k[1]), radius=2, facecolor='r')
            ax.add_patch(circle)

        # draw path
        # left arm
        path = Path(keypoints[5:10:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        # right arm
        path = Path(keypoints[6:11:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        # left leg
        path = Path(keypoints[11:16:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        # right leg
        path = Path(keypoints[12:17:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)

    plt.savefig('D:/model_output/' + ls)
    plt.close(fig)  # free the figure so memory doesn't grow across images
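The slices keypoints[5:10:2], keypoints[6:11:2], and so on depend on the COCO 17-keypoint ordering that torchvision's Keypoint R-CNN predicts. A quick sanity check of which joints each slice connects:

```python
# COCO person keypoint names, in the order keypointrcnn_resnet50_fpn emits them
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]

# the same slices used in the drawing code above
print(COCO_KEYPOINTS[5:10:2])   # left arm: shoulder -> elbow -> wrist
print(COCO_KEYPOINTS[6:11:2])   # right arm
print(COCO_KEYPOINTS[11:16:2])  # left leg: hip -> knee -> ankle
print(COCO_KEYPOINTS[12:17:2])  # right leg
```

So each Path traces a three-joint limb, which is why codes has exactly one MOVETO and two LINETOs.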
If the output looks like the above, it worked.
Automatic detection combined with OpenCV will be covered in a later post.