
https://www.raspberrypi.org/downloads/raspbian/

 


Installing the Raspberry Pi

  • RASPBIAN STRETCH WITH DESKTOP AND RECOMMENDED SOFTWARE : includes LibreOffice, Scratch, Sonic Pi, Thonny, Mathematica, and more
  • RASPBIAN STRETCH WITH DESKTOP : includes the Chromium browser, VLC media player, Python, and more
  • RASPBIAN STRETCH LITE : version without the desktop GUI

https://www.balena.io/etcher/

 


 

Write the downloaded image to the microSD card with balenaEtcher and click Flash!
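
If you prefer the command line over Etcher, the same image can be written with dd on Linux. A minimal sketch, assuming the card shows up as /dev/sdX (check with lsblk first; the device name and image filename here are placeholders):

lsblk                                                                          # find the SD card device
sudo dd if=raspbian-stretch.img of=/dev/sdX bs=4M status=progress conv=fsync   # write the image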

 

The default Raspberry Pi username and password are

id : pi

passwd : raspberry
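
With the same credentials you can also log in over the network once SSH is enabled (Preferences > Raspberry Pi Configuration > Interfaces, or by placing an empty file named ssh on the boot partition), for example:

ssh pi@raspberrypi.local      # default hostname; use the Pi's IP address if .local does not resolve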

 

If Wi-Fi does not work, go to

Preferences > Raspberry Pi Configuration > Localisation

and try setting the Wi-Fi country to GB, US, etc.
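
The same setting can also be changed from a terminal with raspi-config; a sketch, assuming your raspi-config version provides the non-interactive do_wifi_country option:

sudo raspi-config                              # set the Wi-Fi country under Localisation Options
sudo raspi-config nonint do_wifi_country US    # non-interactive equivalent, if available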

 

sudo apt install fonts-nanum fonts-nanum-extra

sudo apt install nabi

sudo apt install im-config
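
These packages provide the Nanum Korean fonts, the nabi Hangul input method, and the im-config tool for choosing the input method. After installing, the default input method still has to be selected; a sketch (the exact entry name in im-config is an assumption, so check the menu):

im-config               # pick the Hangul (nabi) entry in the menu, then log out and back in
im-config -n hangul     # non-interactive form, if your im-config lists that name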

 

If Korean does not display or type properly, see the link below.

https://rpie.tistory.com/1

 


vncserver -geometry 1280x1024      # start a VNC virtual desktop (typically display :1) at 1280x1024
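
From the PC, a VNC viewer can then connect to that virtual desktop; for example (the exact viewer command depends on which VNC client is installed):

vncviewer <raspberry-pi-ip>:1      # display :1 corresponds to TCP port 5901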

 


Building a CCTV

 

First, OpenCV has to be installed; the two links below walk through the installation.

 

https://webnautes.tistory.com/916

 


https://www.alatortsev.com/2018/09/05/installing-opencv-3-4-3-on-raspberry-pi-3-b/

 


Below is the cmake command used for the build, copied here so it does not have to be typed by hand.

 

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=OFF \
-D WITH_IPP=OFF \
-D WITH_1394=OFF \
-D BUILD_WITH_DEBUG_INFO=OFF \
-D BUILD_DOCS=OFF \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D WITH_QT=OFF \
-D WITH_GTK=ON \
-D WITH_OPENGL=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.1.2/modules \
-D WITH_V4L=ON \
-D WITH_FFMPEG=ON \
-D WITH_XINE=ON \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON ../
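
After cmake finishes configuring, the usual build and install steps from the links above follow. A sketch (the -j value is a judgment call: a higher value builds faster but can exhaust the Pi 3's memory, so lower it or add swap if the build dies):

make -j2
sudo make install
sudo ldconfig

python3 -c "import cv2; print(cv2.__version__)"    # should print 4.1.2 if the Python bindings installed correctly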

 

https://www.youtube.com/watch?v=DIGwweDJCBk

 

https://www.youtube.com/watch?v=WgsZc_wS2qQ

 

 

pip3 install imutils

pip3 install imagezmq
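
These two are installed on the Raspberry Pi. Note that client.py below uses VideoStream(usePiCamera=True), which needs the camera interface enabled and the picamera package available; if that has not been set up yet, something like the following should cover it (the picamera[array] extra is, to my understanding, what imutils expects):

sudo raspi-config                    # Interfacing Options -> Camera -> Enable, then reboot
pip3 install "picamera[array]"       # picamera with NumPy array support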

 

client.py

import socket
import time
from imutils.video import VideoStream
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://<server-ip>:5555')  # internal IP of the PC running server.py

rpi_name = socket.gethostname() # send RPi hostname with each image

picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)  # allow camera sensor to warm up

while True:  # send images as stream until Ctrl-C
  image = picam.read()
  sender.send_image(rpi_name, image)
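
imagezmq uses a REQ/REP socket pair by default, so send_image blocks until the hub replies; start server.py on the PC first, then run the client on the Pi (adjust the interpreter names to your setup):

python server.py      # on the PC
python3 client.py     # on the Raspberry Pi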

 

On the server PC, install imagezmq and imutils as well (and check that import imagezmq works in a Python shell):

pip install imagezmq

conda install -c conda-forge imutils

 

server.py

import cv2
import imagezmq

image_hub = imagezmq.ImageHub()               # listens on tcp://*:5555 by default

while True:
  rpi_name, image = image_hub.recv_image()    # blocks until a frame arrives

  cv2.imshow(rpi_name, image)                 # one window per sending hostname
  if cv2.waitKey(1) == ord('q'):
    break

  image_hub.send_reply(b'OK')                 # REQ/REP: the client waits for this reply

 

If the CCTV feed shows up like this, the next step is to save the incoming frames as images.

 

 

 

import cv2
import imagezmq
from time import gmtime, strftime
 
image_hub = imagezmq.ImageHub()

while True:
  rpi_name, image = image_hub.recv_image()

  image2 = cv2.resize(image, dsize=(640, 480), interpolation=cv2.INTER_AREA)
  cv2.imshow(rpi_name, image2)

  # save every received frame as D:/image/YYYYMMDD_HH_MM_SS.png (UTC timestamp)
  imgfile = 'D:/image/' + strftime("%Y%m%d_%H_%M_%S", gmtime()) + '.png'
  cv2.imwrite(imgfile, image2)

  if cv2.waitKey(1) == ord('q'):
    break
  
  image_hub.send_reply(b'OK')
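
Since the filename only has one-second resolution, frames that arrive within the same second overwrite each other. If you would rather keep one frame every few seconds, a small time check is enough; a sketch of the same loop (the 5-second interval is an arbitrary choice):

import time
import cv2
import imagezmq
from time import gmtime, strftime

SAVE_INTERVAL = 5        # seconds between saved frames (arbitrary)
last_saved = 0.0

image_hub = imagezmq.ImageHub()

while True:
  rpi_name, image = image_hub.recv_image()
  image2 = cv2.resize(image, dsize=(640, 480), interpolation=cv2.INTER_AREA)
  cv2.imshow(rpi_name, image2)

  if time.time() - last_saved >= SAVE_INTERVAL:
    imgfile = 'D:/image/' + strftime("%Y%m%d_%H_%M_%S", gmtime()) + '.png'
    cv2.imwrite(imgfile, image2)
    last_saved = time.time()

  if cv2.waitKey(1) == ord('q'):
    break

  image_hub.send_reply(b'OK')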


Code that detects body joints (keypoints) from the saved images:

import torch
import torchvision
from torchvision import models
import torchvision.transforms as T

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches

print('pytorch', torch.__version__)
print('torchvision', torchvision.__version__)

IMG_SIZE = 480
THRESHOLD = 0.95


model = models.detection.keypointrcnn_resnet50_fpn(pretrained=True).eval()

import os
d_list='D:/image/'
for ls in os.listdir(d_list):
    print(ls)
    img = Image.open(d_list+ls)
    img = img.resize((IMG_SIZE, int(img.height * IMG_SIZE / img.width)))
    
    
    
    trf = T.Compose([
        T.ToTensor()
    ])
    
    input_img = trf(img)
    with torch.no_grad():                 # inference only, no gradients needed
        out = model([input_img])[0]
    codes = [Path.MOVETO,Path.LINETO,Path.LINETO]    
    fig, ax = plt.subplots(1, figsize=(16, 16))    
    ax.imshow(img)
    
    for box, score, keypoints in zip(out['boxes'], out['scores'], out['keypoints']):
        score = score.detach().numpy()
    
        if score < THRESHOLD:
            continue
    
        box = box.detach().numpy()
        keypoints = keypoints.detach().numpy()[:, :2]
    
        rect = patches.Rectangle((box[0], box[1]), box[2]-box[0], box[3]-box[1], linewidth=2, edgecolor='b', facecolor='none')
        ax.add_patch(rect)
    
        # 17 keypoints
        for k in keypoints:
            circle = patches.Circle((k[0], k[1]), radius=2, facecolor='r')
            ax.add_patch(circle)
        
        # draw path
        # left arm
        path = Path(keypoints[5:10:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        
        # right arm
        path = Path(keypoints[6:11:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        
        # left leg
        path = Path(keypoints[11:16:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
        
        # right leg
        path = Path(keypoints[12:17:2], codes)
        line = patches.PathPatch(path, linewidth=2, facecolor='none', edgecolor='r')
        ax.add_patch(line)
    plt.savefig('D:/model_output/'+ls)
    plt.close(fig)                        # free the figure before processing the next image
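
For reference, torchvision's keypointrcnn_resnet50_fpn outputs 17 keypoints per person in the COCO order, which is why the arm paths above use keypoints 5, 7, 9 and 6, 8, 10, and the leg paths use 11, 13, 15 and 12, 14, 16:

# COCO keypoint order used by keypointrcnn_resnet50_fpn
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',        # 0-4: head
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',  # 5-8: shoulders, elbows
    'left_wrist', 'right_wrist',                                     # 9-10: wrists
    'left_hip', 'right_hip', 'left_knee', 'right_knee',              # 11-14: hips, knees
    'left_ankle', 'right_ankle',                                     # 15-16: ankles
]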

 

If the output looks like this, it worked.

 

Automatic detection combined with OpenCV will be covered in a later post.