Hello, I have an autonomous drone project using **Python with the OpenCV** library. I need to preload an image of the landing point, have the program recognize that landing point in the camera feed, and then have the camera follow it.
↧
How do I recognize a preloaded 2D image and follow it with a rectangle from a webcam, in Python OpenCV?
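A minimal sketch of one common approach to the question above: match ORB features between the preloaded landing-point image and each camera frame, estimate a homography, and draw the projected rectangle. The file name, thresholds, and camera index are illustrative assumptions; the asker wants Python, and these calls map one-to-one onto the `cv2` bindings.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat target = cv::imread("landing_point.png", cv::IMREAD_GRAYSCALE); // hypothetical file
    if (target.empty()) return 1;
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpT;
    cv::Mat desT;
    orb->detectAndCompute(target, cv::noArray(), kpT, desT);

    cv::VideoCapture cap(0);
    cv::BFMatcher matcher(cv::NORM_HAMMING);   // Hamming distance for ORB
    cv::Mat frame, gray;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::KeyPoint> kpF;
        cv::Mat desF;
        orb->detectAndCompute(gray, cv::noArray(), kpF, desF);
        std::vector<std::vector<cv::DMatch>> knn;
        if (!desF.empty()) matcher.knnMatch(desT, desF, knn, 2);
        std::vector<cv::Point2f> src, dst;
        for (const std::vector<cv::DMatch>& mn : knn)
            if (mn.size() == 2 && mn[0].distance < 0.75f * mn[1].distance)
            {                                  // Lowe ratio test
                src.push_back(kpT[mn[0].queryIdx].pt);
                dst.push_back(kpF[mn[0].trainIdx].pt);
            }
        if (src.size() >= 4)                   // homography needs 4+ matches
        {
            cv::Mat H = cv::findHomography(src, dst, cv::RANSAC);
            if (!H.empty())
            {
                std::vector<cv::Point2f> corners = {
                    {0.f, 0.f}, {(float)target.cols, 0.f},
                    {(float)target.cols, (float)target.rows},
                    {0.f, (float)target.rows}};
                std::vector<cv::Point2f> proj;
                cv::perspectiveTransform(corners, proj, H);
                for (int i = 0; i < 4; ++i)    // rectangle around the target
                    cv::line(frame, proj[i], proj[(i + 1) % 4],
                             cv::Scalar(0, 255, 0), 2);
            }
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}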
↧
Capture camera in background
Hi guys,
I want to start capturing from the camera for circle detection, but I don't want to see the webcam feed.
In this code I draw a trace line through the coordinates of the detected circles, so I don't need to see the original video from the webcam.
I tried setting ibOriginal's Visible property to False, but then the app no longer works.
I tried opening the webcam in a new form, but if you minimize that form the app stops working.
Please guide me.
Thanks so much.
Sub ProcessImageAndUpdateGUI(sender As Object, arg As EventArgs)
    imgOriginal = capWebcam.QueryFrame().ToImage(Of Bgr, Byte) 'get next frame from webcam
    If (imgOriginal Is Nothing) Then 'if imgOriginal is null
        Return 'bail
    End If
    imgSmoothed = imgOriginal.PyrDown().PyrUp()
    imgSmoothed._SmoothGaussian(3)
    imgGrayColorFiltered = imgSmoothed.Convert(Of Gray, Byte)()
    Dim grayCannyThreshold2 As Double = 160
    Dim grayThreshLinking2 As Double = 80
    Dim grayCannyThreshold As Gray = New Gray(160) 'first Canny threshold, used for both circle detection, and line / triangle / rectangle detection
    Dim grayCircleAccumThreshold As Gray = New Gray(100) 'second Canny threshold for circle detection, higher number = more selective
    Dim grayThreshLinking As Gray = New Gray(80) 'second Canny threshold for line / triangle / rectangle detection
    Dim dblAccumRes As Double = 2.0 'resolution of the accumulator used to detect centers of circles
    Dim dblMinDistBetweenCircles As Double = imgGrayColorFiltered.Height / 4 'min distance between centers of detected circles
    Dim intMinRadius As Integer = 10 'min radius of circles to search for
    Dim intMaxRadius As Integer = 50 'max radius of circles to search for
    'find circles
    Dim circles As CircleF() = imgGrayColorFiltered.HoughCircles(grayCannyThreshold, grayCircleAccumThreshold, dblAccumRes, dblMinDistBetweenCircles, intMinRadius, intMaxRadius)(0)
    Dim g As Graphics = ibCircles.CreateGraphics()
    For Each circle As CircleF In circles
        points.Add(New Point(CInt(circle.Center.X), CInt(circle.Center.Y)))
        If points.Count > 1 Then
            g.DrawLines(pen, points.ToArray())
        End If
        If (ckDrawCirclesOnOriginalImage.Checked = True) Then 'if check box is checked
            CvInvoke.Circle(imgOriginal, New Point(CInt(circle.Center.X), CInt(circle.Center.Y)), CInt(circle.Radius), New MCvScalar(0, 0, 255), 2)
            CvInvoke.Circle(imgOriginal, New Point(CInt(circle.Center.X), CInt(circle.Center.Y)), 3, New MCvScalar(0, 255, 0), -1)
        End If
    Next
    ibOriginal.Image = imgOriginal 'show the processed frame on the form
End Sub
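The code above is Emgu CV / VB.NET; the underlying pattern - grab frames, detect circles, and draw the trace onto an off-screen canvas so only the trace (never the source video) is shown - looks like this in plain OpenCV C++, as a sketch reusing the question's Hough parameters with illustrative names:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, canvas;
    std::vector<cv::Point> trail;
    while (cap.read(frame))                    // the source frame is never shown
    {
        if (canvas.empty())
            canvas = cv::Mat::zeros(frame.size(), CV_8UC3);  // trace-only image
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(3, 3), 0);
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                         2.0, gray.rows / 4.0, 160, 100, 10, 50);
        for (const cv::Vec3f& c : circles)
            trail.push_back(cv::Point(cvRound(c[0]), cvRound(c[1])));
        for (size_t i = 1; i < trail.size(); ++i)
            cv::line(canvas, trail[i - 1], trail[i], cv::Scalar(0, 255, 0), 2);
        cv::imshow("trace", canvas);           // only the trace window appears
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}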
↧
Using multithreading in OpenCV
Hello, I am currently trying to use multithreading with OpenCV, but this code is not working.
#include <iostream>
#include <thread>
#include <opencv2/opencv.hpp>
#include "opencv2/nonfree/features2d.hpp"
#include "FeatureMatching.h"
#include "FaceDetection.h"
#include "AugmentedReality.h"

using namespace std;
using namespace cv;

Mat colorImg;
VideoCapture capture(0);

//Captures image from camera
void doCapture(VideoCapture capture)
{
    while (true)
    {
        capture.read(colorImg);
        imshow("HUD", colorImg);
        cvWaitKey(1);
    }
}

//Decides what to do
void doDecision()
{
    while (true)
    {
        int k = waitKey(1);
        if (k == 49)
        {
            cout << "Detecting..." << endl;
            doMatch(colorImg);
        }
        else if (k == 50)
        {
            cout << "Detecting..." << endl;
            doDetect(colorImg);
        }
        else if (k == 51)
        {
            cout << "Displaying..." << endl;
            while (waitKey(1) != 51)
            {
                doAR(capture);
            }
        }
        else if (k == 27)
        {
            cout << "Stop";
        }
    }
}

//Main function
int main()
{
    thread t1(doCapture, capture);
    thread t2(doDecision);
    //cvNamedWindow("HUD", 0);
    //cvSetWindowProperty("HUD", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
    initMatch();
    initFace();
    initAR();
    t1.join();
    t2.join();
}
If anyone needs to see more of my code, I'll post it. Can someone please tell me what the problem is? Thanks in advance.
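Two things stand out: `colorImg` is written by `t1` and read by `t2` with no synchronization, and HighGUI (`imshow`/`waitKey`) is not thread-safe, so driving it from more than one thread is itself a likely failure. The usual shape is to guard the shared frame with a mutex and keep all GUI calls on a single thread; a sketch using the question's names (the mutex and the helper are additions):

#include <mutex>
#include <opencv2/opencv.hpp>

cv::Mat colorImg;
std::mutex frameMutex;

void doCapture(cv::VideoCapture capture)
{
    cv::Mat local;
    while (capture.read(local))
    {
        std::lock_guard<std::mutex> lock(frameMutex);
        local.copyTo(colorImg);        // publish the frame under the lock
    }
}

cv::Mat latestFrame()                  // call this from the single GUI thread
{
    std::lock_guard<std::mutex> lock(frameMutex);
    return colorImg.clone();           // clone so the caller owns its copy
}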
↧
How to create `list<Mat>` in 3.0?
I intend to use multithreading to capture and process the video from my webcam, so I define a global list to store frames. In 2.4, `list<Mat> framelist;` is available. However, the same code cannot be compiled in 3.0. How should I create a `list<Mat>` in 3.0?
Thanks!
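A minimal sketch of a shared `std::list<cv::Mat>` frame buffer, assuming the missing piece is simply that `<list>` (and the `std::` qualifier, or a using-declaration) now has to be spelled out explicitly rather than arriving via some other header:

#include <list>
#include <mutex>
#include <opencv2/opencv.hpp>

std::list<cv::Mat> framelist;
std::mutex framelistMutex;

void pushFrame(const cv::Mat& frame)
{
    std::lock_guard<std::mutex> lock(framelistMutex);
    framelist.push_back(frame.clone());  // clone: the capture reuses its buffer
}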
↧
SIFT feature match coordinates
Hello,
I want to print the coordinates of the detected feature keypoints from the FLANN-based matcher algorithm: http://docs.opencv.org/trunk/dc/dc3/tutorial_py_matcher.html.
The search works fine and, as in the tutorial, shows the keypoints in red (all) and green (good).
I want to print only the (x, y) coordinates, named `kp2` here, of the second image (the scene), but it doesn't work.
Here is my code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('img1.jpg',0) # queryImage
img2 = cv2.imread('img2.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
        print(i,kp2[i].pt)
draw_params = dict(matchColor = (0,255,0),
singlePointColor = (255,0,0),
matchesMask = matchesMask,
flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3,),plt.show()
My result :
77 (67.68722534179688, 92.98455047607422)
82 (14.395119667053223, 93.1697998046875)
86 (127.58460235595703, 98.1304931640625)
109 (66.52041625976562, 111.51738739013672)
110 (66.52041625976562, 111.51738739013672)
146 (69.3978500366211, 11.287369728088379)
The number of matched keypoints is right, but the coordinates printed by **print(i,kp2[i].pt)** are wrong; I checked against the original image.
What did I do wrong, and which lines do I need to change to print only the matched keypoints' coordinates?
Thanks for all.
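The loop index `i` here runs over the match list, not over `kp2`; a `DMatch` carries the scene-image keypoint index in `trainIdx` (and the query-image index in `queryIdx`), so the print line presumably wants `kp2[m.trainIdx].pt`. The same ratio test as a C++ fragment, assuming `kp2`, `des1`, `des2` as in the code above:

cv::FlannBasedMatcher matcher;                 // FLANN works on float (SIFT) descriptors
std::vector<std::vector<cv::DMatch>> matches;
matcher.knnMatch(des1, des2, matches, 2);
for (size_t i = 0; i < matches.size(); ++i)
{
    const cv::DMatch& m = matches[i][0];
    const cv::DMatch& n = matches[i][1];
    if (m.distance < 0.7f * n.distance)        // Lowe ratio test
        std::cout << i << " " << kp2[m.trainIdx].pt << std::endl;
}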
↧
Cascade Classifier - detectMultiScale does not return MatOfRect objects
I am working on my Android application as mentioned [here](http://answers.opencv.org/question/173490/how-to-recognize-specific-objects/). I am having difficulties with the CascadeClassifier class. I use a HAAR classifier: I load a trained cascade from XML and then call detectMultiScale with the parameters (Mat image, MatOfRect detectedObjects). But when I run the application I always get 0 detected objects, even if I take a photo with the object in the image. I use OpenCV 3.2.0. I also tried an LBP classifier, and it is still the same.
I trained my classifier [in this application](http://amin-ahmadi.com/cascade-trainer-gui/). Does someone have experience with this GUI cascade trainer? The type of the input image is CV_8UC1.
I don't see a problem with loading the XML, because the statement below runs fine, but it returns no detected objects, as mentioned.
I haven't tried training the classifier on the command line yet. That would be a bit of a problem because I am on Windows, but I'll try it.
Or did I miss something? Let me know.
private CascadeClassifier cascadeClassifier = new CascadeClassifier("/storage/emulated/0/Pics/cascade1.xml");
private MatOfRect matOfRect = new MatOfRect();

public void classifier(Mat object) {
    if (!cascadeClassifier.empty()) {
        Imgproc.resize(object, object, new Size(170, 240));
        cascadeClassifier.detectMultiScale(object, matOfRect);
        MyLog.i(MyLog.TAG, matOfRect.size().toString()); // It always prints size 1x0
    }
}
The classifier was trained on this kind of object.
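`detectMultiScale` reports nothing rather than erroring when the model file is unreadable on the device, when the input contrast is poor, or when the resize to 170x240 leaves the object smaller than the training window. Passing the parameters explicitly helps rule out unlucky defaults; a C++ fragment (the Java binding has the same overload), with illustrative values and the question's variable names:

std::vector<cv::Rect> found;
cv::equalizeHist(object, object);          // object is CV_8UC1, as stated above
cascadeClassifier.detectMultiScale(object, found,
                                   1.1,                // scaleFactor
                                   3,                  // minNeighbors
                                   0,                  // flags (legacy, unused)
                                   cv::Size(24, 24),   // minSize >= training window
                                   cv::Size());        // maxSize: unlimited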

↧
cv::cudacodec::VideoReader unable to play RTSP stream
**System information**
- OpenCV => 3.3.0
- Operating System / Platform => Ubuntu 16.04, x86_64
- Compiler => gcc version 5.4.1 20160904
- Cuda => 8.0
- Nvidia card => GTX 1080 Ti
- ffmpeg details
- libavutil 55. 74.100 / 55. 74.100
- libavcodec 57.103.100 / 57.103.100
- libavformat 57. 77.100 / 57. 77.100
- libavdevice 57. 7.101 / 57. 7.101
- libavfilter 6.100.100 / 6.100.100
- libswscale 4. 7.103 / 4. 7.103
- libswresample 2. 8.100 / 2. 8.100
**Detailed description**
I am trying to play an RTSP stream using `cudacodec::VideoReader`.
###### RTSP stream details (from VLC)

This stream plays fine in VLC and `cv::VideoCapture`, but when I try to play it with `cudacodec::VideoReader` I get an **error** saying:
`OpenCV Error: Gpu API call (CUDA_ERROR_FILE_NOT_FOUND [Code = 301]) in CuvidVideoSource, file /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/cuvid_video_source.cpp, line 66`
`OpenCV Error: Assertion failed (init_MediaStream_FFMPEG()) in FFmpegVideoSource, file /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp, line 101`
**Steps to reproduce**
#include <iostream>
#include "opencv2/opencv_modules.hpp"

#if defined(HAVE_OPENCV_CUDACODEC)

#include <opencv2/core.hpp>
#include <opencv2/cudacodec.hpp>
#include <opencv2/highgui.hpp>

int main(int argc, const char* argv[])
{
    const std::string fname = "rtsp://admin:admin@192.168.1.13/media/video2";
    cv::namedWindow("GPU", cv::WINDOW_NORMAL);
    cv::cuda::GpuMat d_frame;
    cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);
    for (;;)
    {
        if (!d_reader->nextFrame(d_frame))
            break;
        cv::Mat frame;
        d_frame.download(frame);
        cv::imshow("GPU", frame);
        if (cv::waitKey(3) > 0)
            break;
    }
    return 0;
}

#else

int main()
{
    std::cout << "OpenCV was built without CUDA Video decoding support\n" << std::endl;
    return 0;
}

#endif
I tried debugging it using GDB and saw that in `ffmpeg_video_source.cpp`, `bool init_MediaStream_FFMPEG()` returns directly, without the if condition being checked.
**GDB output**
cv::cudacodec::detail::FFmpegVideoSource::FFmpegVideoSource
(this=0x402a20 <_start>, fname=...) at /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp:98
98 cv::cudacodec::detail::FFmpegVideoSource::FFmpegVideoSource(const String& fname) :
(gdb) n
99 stream_(0)
(gdb) n
101 CV_Assert( init_MediaStream_FFMPEG() );
(gdb) s
(anonymous namespace)::init_MediaStream_FFMPEG () at /home/deep/Development/libraries/opencv/opencv/modules/cudacodec/src/ffmpeg_video_source.cpp:94
94 return initialized;
(gdb) display initialized
4: initialized = false
(gdb) s
95 }
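The GDB trace shows `init_MediaStream_FFMPEG()` returning a cached `initialized = false`, which typically means the FFmpeg wrapper library that cudacodec loads at runtime was not found or was built without the needed support, so the reader never even reaches the stream. Until the build is fixed, a CPU-decode fallback keeps the rest of a GPU pipeline usable; a fragment, reusing `fname` from the sample:

cv::VideoCapture cap(fname);             // CPU/FFmpeg decode instead of NVDEC
cv::Mat frame;
cv::cuda::GpuMat d_frame;
while (cap.read(frame))
{
    d_frame.upload(frame);               // hand each frame to the CUDA stages
    // ... GPU processing on d_frame ...
    cv::imshow("GPU", frame);
    if (cv::waitKey(3) > 0) break;
}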
↧
How to use JavaCameraView in portrait mode?
Hello.
How do I use the JavaCameraView from OpenCV in portrait mode?
I found lots of [solutions](https://stackoverflow.com/a/30453132/963319), but they end up not working when I need to use, for example,
FeatureDetector detector = FeatureDetector.create(FeatureDetector.MSER);
Like [others have noted](https://stackoverflow.com/questions/14816166/rotate-camera-preview-to-portrait-android-opencv-camera/30453132#comment60413463_30453132), it presumes that the picture is in landscape mode when I pass it
detector.detect(mGrey, keypoints);
I tried all kinds of trickery, like doing:
Mat copy = new Mat();
Core.flip(mGrey.t(), copy, 1);
detector.detect(mGrey, keypoints);
It doesn't seem to work though.
I think I need to do - with the mGrey matrix - the inverse of this (from the [example above](https://stackoverflow.com/a/30453132/963319)):
private Matrix rotateMe(Canvas canvas, Bitmap bm) {
    Matrix mtx = new Matrix();
    float scale = (float) canvas.getWidth() / (float) bm.getHeight();
    mtx.preTranslate((canvas.getWidth() - bm.getWidth()) / 2, (canvas.getHeight() - bm.getHeight()) / 2);
    mtx.postRotate(90, canvas.getWidth() / 2, canvas.getHeight() / 2);
    mtx.postScale(scale, scale, canvas.getWidth() / 2, canvas.getHeight() / 2);
    return mtx;
}
If anyone know how to overcome this problem - by hook or by crook - let me know.
To reiterate: how do I correctly show the `JavaCameraView` in portrait mode? Correctly means that the `FeatureDetector` should be able to process the `mGrey` matrix correctly. Right now the `FeatureDetector` sees the image in landscape orientation even if I hold the phone in portrait, so when it selects text areas in the image, it selects them vertically and incorrectly. When I turn the phone to landscape, it selects everything correctly. I much prefer portrait mode for my app.
Thank you kindly in advance.
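One detail in the snippet above: the rotated copy is computed, but `detect` is still called on `mGrey` rather than on `copy`. The rotate-then-detect pattern, as a C++ sketch (the Java calls are the same; transpose plus flip is a version-safe 90-degree rotation):

cv::Mat portrait;
cv::transpose(mGrey, portrait);        // step 1 of a 90-degree rotation
cv::flip(portrait, portrait, 1);       // step 2: flip around y = clockwise
detector->detect(portrait, keypoints); // detect on the rotated image
// keypoints are now in portrait coordinates; map them back before
// drawing on the unrotated frame.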
↧
Windows has triggered a breakpoint in test.exe
This may be due to a corruption of the heap, which indicates a bug in test.exe or any of the DLLs it has loaded. I am using the GitHub repository https://github.com/puku0x/cvdrone/tree/master/samples to test the sample codes, and I get this error for every program except the first one that was built (test.exe). I am using VS 2010. Any help would be appreciated.
↧
Problems with Augmented Reality
Hello, I am trying to create a program which uses augmented reality. However, I am currently stuck on drawing the object. This is my code so far:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
using namespace std;
using namespace cv;
Mat camMat;
Mat distortion;
//Initializes augmented reality
void initAR()
{
FileStorage fs("out_camera_data.xml", FileStorage::READ);
fs["Camera_Matrix"] >> camMat;
fs["Distortion_Coefficients"] >> distortion;
}
//Load 3d model
void loadModel(vector squares, Mat colorImg)
{
vector objectPt = { Point3f(-1, -1, 0), Point3f(-1, 1, 0), Point3f(1, 1, 0), Point3f(1, -1, 0) };
Mat objectMat(objectPt);
Mat rvec;
Mat tvec;
solvePnP(objectMat, squares[0], camMat, distortion, rvec, tvec);
vector line3d[4];
line3d[0] = { { 1, 1, 0 }, { 1, 0, 0 }, { 0, 0, 0 } };
//line3d[1] = { { -1, 0, 0 },{ 1, 0, 1 } };
//line3d[2] = { { 1, 0, 0 },{ -1, 0, 1 } };
//line3d[3] = { { 1, 0, 0 },{ 1, 0, 1 } };
vector line2d[4];
projectPoints(line3d[0], rvec, tvec, camMat, distortion, line2d[0]);
polylines(colorImg, line2d[0][0], true, Scalar(255, 0, 0), 2);
}
//Displays 3d object
void doAR(Mat colorImg)
{
Mat bwImg;
Mat blurImg;
Mat threshImg;
vector> cnts;
if (!colorImg.empty())
{
cvtColor(colorImg, bwImg, CV_BGR2GRAY);
blur(bwImg, blurImg, Size(5, 5));
threshold(blurImg, threshImg, 128.0, 255.0, THRESH_OTSU);
findContours(threshImg, cnts, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
vector squares;
for (auto contour : cnts)
{
vector approx;
approxPolyDP(contour, approx, arcLength(Mat(contour), true)*0.02, true);
if (approx.size() == 4 && fabs(contourArea(Mat(approx))) > 1000 && isContourConvex(Mat(approx)))
{
Mat square;
Mat(approx).convertTo(square, CV_32FC3);
squares.push_back(square);
}
}
if (squares.size() > 0)
{
loadModel(squares, colorImg);
}
cvNamedWindow("AR", 0);
cvSetWindowProperty("AR", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
imshow("AR", colorImg);
waitKey(1);
}
}
I am trying to draw a triangle with the points in line3d. Can someone please tell me why this isn't drawing? If anyone needs to see more of my code, I will post more in an answer.
Thanks in advance.
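One likely culprit is the draw call: `polylines(colorImg, line2d[0][0], true, ...)` passes a single projected point rather than the point list, and `polylines` also wants integer coordinates. A fragment of the corrected step, assuming `line2d[0]` holds the projected triangle as in `loadModel`:

std::vector<std::vector<cv::Point>> polys(1);
for (const cv::Point2f& p : line2d[0])
    polys[0].push_back(cv::Point(cvRound(p.x), cvRound(p.y)));  // round to int
cv::polylines(colorImg, polys, true, cv::Scalar(255, 0, 0), 2);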
↧
Hausdorff Distance Object Detection
I have been struggling to implement the outlining algorithm described [here](http://www.cs.cornell.edu/vision/hausdorff/hausmatch.html) and [here](http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/98/normand/main.html#applic).
The general idea of the papers is to determine the Hausdorff distance of binary images and use it to find the template image within a test image.
For template matching, it is recommended to construct [image pyramids](http://docs.opencv.org/2.4/doc/tutorials/imgproc/pyramids/pyramids.html) along with sliding windows, which you slide over the test image for detection. I was able to do both of these as well.
I am stuck on how to move forward from here. Do I slide my template over the test image at different pyramid levels? Or the test image over the template? And as for the sliding window, is it meant to be an ROI of the test image or of the template?
In a nutshell, I have the pieces of the puzzle but no idea which direction to take to solve it.
int distance(vector<Point> const& image, vector<Point> const& tempImage)
{
    int maxDistance = 0;
    for (Point imagePoint : image)
    {
        int minDistance = numeric_limits<int>::max();
        for (Point tempPoint : tempImage)
        {
            Point diff = imagePoint - tempPoint;
            int length = (diff.x * diff.x) + (diff.y * diff.y);
            if (length < minDistance) minDistance = length;
            if (length == 0) break;
        }
        maxDistance += minDistance;
    }
    return maxDistance;
}

double hausdorffDistance(vector<Point> const& image, vector<Point> const& tempImage)
{
    double maxDistImage = distance(image, tempImage);
    double maxDistTemp = distance(tempImage, image);
    return sqrt(max(maxDistImage, maxDistTemp));
}

vector<Mat> buildPyramids(Mat& frame)
{
    vector<Mat> pyramids;
    int count = 6;
    Mat prevFrame = frame, nextFrame;
    while (count > 0)
    {
        resize(prevFrame, nextFrame, Size(), .85, .85);
        prevFrame = nextFrame;
        pyramids.push_back(nextFrame);
        --count;
    }
    return pyramids;
}

vector<Rect> slidingWindows(Mat& image, int stepSize, int width, int height)
{
    vector<Rect> windows;
    for (size_t row = 0; row < image.rows; row += stepSize)
    {
        if ((row + height) > image.rows) break;
        for (size_t col = 0; col < image.cols; col += stepSize)
        {
            if ((col + width) > image.cols) break;
            windows.push_back(Rect(col, row, width, height));
        }
    }
    return windows;
}
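One way to assemble the pieces: keep the template's edge points fixed, pyramid the *test* image to handle scale, and slide a window (sized like the template) over each level, scoring every window with `hausdorffDistance` and keeping the minimum. Note that `distance()` above accumulates the per-point minima rather than taking their maximum, so it is a modified (summed) Hausdorff score, which is still fine for ranking. A sketch built on the question's own helpers:

struct Detection { double score; Rect where; int level; };

Detection search(Mat& testEdges, vector<Point> const& templPts,
                 int winW, int winH)
{
    Detection best = { numeric_limits<double>::max(), Rect(), -1 };
    vector<Mat> pyr = buildPyramids(testEdges);
    for (int lvl = 0; lvl < (int)pyr.size(); ++lvl)
    {
        for (const Rect& win : slidingWindows(pyr[lvl], 4, winW, winH))
        {
            vector<Point> winPts;
            findNonZero(pyr[lvl](win), winPts);   // edge points inside window
            if (winPts.empty()) continue;
            double d = hausdorffDistance(winPts, templPts);
            if (d < best.score) best = { d, win, lvl };
        }
    }
    return best;   // smallest distance = best template location and scale
}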
↧
What's the equivalent of put (in Java) for OpenCV C++?
I would like to translate the code below from Java to OpenCV C++. I don't know what the equivalent of put is. Please help me with this.
Mat A = new Mat(1, 3, CvType.CV_32FC1);
A.put(0, 0, -1);
A.put(0, 1, 0);
A.put(0, 2, 1);
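For single elements, the C++ equivalent of Java's `Mat.put` is `Mat::at<T>`; for a small constant matrix, the `Mat_` comma initializer is the idiomatic shortcut:

cv::Mat A(1, 3, CV_32FC1);
A.at<float>(0, 0) = -1.0f;   // row 0, column 0
A.at<float>(0, 1) =  0.0f;
A.at<float>(0, 2) =  1.0f;
// or, equivalently:
cv::Mat B = (cv::Mat_<float>(1, 3) << -1, 0, 1);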
↧
Best way to convert a Mat to an Image from C++ to C#
I want to know the best way to convert an OpenCV Mat to an Image, from C++ to C#.
I tried several things without success, like a vector to byte[] conversion; I also tried to return a Bitmap, but that failed too, so I need your input here.
I'm kind of lost, so I am in need of help.
Thanks.
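One pattern that works is a small `extern "C"` layer on the C++ side that exposes the frame's geometry and copies its BGR bytes into a buffer the C# side allocates, which C# then wraps in a `Bitmap` (`Format24bppRgb`) via `Marshal.Copy`. A sketch of the C++ side; the function names are hypothetical, and a Windows build also needs `__declspec(dllexport)`:

#include <cstring>
#include <opencv2/opencv.hpp>

// Valid only for a continuous CV_8UC3 (BGR) Mat.
extern "C" int GetFrameInfo(cv::Mat* m, int* width, int* height, int* stride)
{
    if (!m || m->type() != CV_8UC3 || !m->isContinuous()) return -1;
    *width = m->cols;
    *height = m->rows;
    *stride = (int)m->step;
    return 0;
}

extern "C" int CopyFrameData(cv::Mat* m, unsigned char* dst, int dstSize)
{
    if (!m || !dst) return -1;
    int needed = (int)(m->step * m->rows);
    if (dstSize < needed) return -1;
    std::memcpy(dst, m->data, needed);   // dst is allocated by the C# caller
    return needed;
}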
↧
OpenCV camera calibration error: OpenCV Error: Unknown error code -49 (Input file is empty) in cvOpenFileStorage
I'm trying to calibrate my Pi Camera with the `camera_calibration.cpp` sample. I have taken 6 images of the sample chessboard. `cmake` and `make` work, but after running camera_calibration I get the following error:
./camera_calibration default.xml
This is a camera calibration sample.
Usage: calibration configurationFile
Near the sample file you'll find the configuration file, which has detailed help of how to edit it. It may be any OpenCV supported file format XML/YAML.
OpenCV Error: Unknown error code -49 (Input file is empty) in cvOpenFileStorage, file /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/core/src/persistence.cpp, line 4422
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/core/src/persistence.cpp:4422: error: (-49) Input file is empty in function cvOpenFileStorage
Aborted (core dumped)
Google doesn't want to give me an answer this time; I hope someone else does. I used this guide: [http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html](http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html).
I have all the files, images, etc. in one folder.
default.xml
9 6 27 "CHESSBOARD" "VID5.xml"0 100 26 1 1 1 "out_camera_data.xml" 1 1 1
VID5.xml
<?xml version="1.0"?>
<opencv_storage>
<images>
57.jpg
80.jpg
115.jpg
149.jpg
194.jpg
</images>
</opencv_storage>
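The `default.xml` listing above appears to have lost its XML tags (likely to HTML escaping), so it cannot be judged as shown; the -49 error itself means `cvOpenFileStorage` found `default.xml` empty or unparseable in the directory the program was run from. A quick standalone parse check, as a sketch:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    try
    {
        cv::FileStorage fs("default.xml", cv::FileStorage::READ);
        std::cout << (fs.isOpened() ? "default.xml parsed fine"
                                    : "default.xml could not be opened")
                  << std::endl;
    }
    catch (const cv::Exception& e)   // empty/malformed files throw
    {
        std::cerr << "parse error: " << e.what() << std::endl;
    }
    return 0;
}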
↧
Build OpenCV into an Axis camera, embedded app
Hello. I need to build OpenCV for an **Axis camera** (embedded app).
My question is: how can I install the desired modules on the camera so that I can use my code written in C++?
Thanks a lot
↧
How to draw filled contours on top of another filled contour with a different color/intensity?
Hi,
I use OpenCV 3.0 and I have a shape A to detect (see the attached picture).
My aim is to detect shape A and draw each contour filled, with a different intensity per contour (a single-channel matrix only). The idea is to give the inner circles a higher intensity than the outer ones, as shown by C in the attachment.
I can detect the circles using thresholding and contour detection. However, when I loop over the detected contours to draw them, the result is one contour with a single intensity, as shown by B in the attachment picture.
It seems like the looping is not adding intensity on each iteration.
Can someone help me out?
Thanks.
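One way to get the C-style result is to compute each contour's nesting depth from the hierarchy that `findContours` returns with `RETR_TREE`, then fill contours from shallow to deep so inner fills overwrite outer ones. A sketch; the `50 + 50 * depth` intensity scale is an illustrative choice:

#include <algorithm>
#include <opencv2/opencv.hpp>

cv::Mat fillByDepth(const cv::Mat& binary)   // binary: thresholded CV_8UC1 input
{
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary.clone(), contours, hierarchy,
                     cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

    std::vector<std::pair<int, int>> order;  // (nesting depth, contour index)
    for (int i = 0; i < (int)contours.size(); ++i)
    {
        int depth = 0;
        for (int p = hierarchy[i][3]; p >= 0; p = hierarchy[p][3])
            ++depth;                         // walk up the parent links
        order.push_back(std::make_pair(depth, i));
    }
    std::sort(order.begin(), order.end());   // shallow (outer) contours first

    cv::Mat result = cv::Mat::zeros(binary.size(), CV_8UC1);
    for (const std::pair<int, int>& di : order)
    {
        int intensity = std::min(255, 50 + 50 * di.first);
        cv::drawContours(result, contours, di.second,
                         cv::Scalar(intensity), cv::FILLED);
    }
    return result;
}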

↧
Improve my findContours for square detection?
Hi guys,
for a robotics project I want to detect a black square in real time (resolution: 720p) using ROS and OpenCV. I'm working at this high resolution because I started with VGA and then figured out that findContours() finds contours at distances beyond 3 meters quite a bit better: with VGA I can only detect the marker up to a distance of about 3 meters, while with 720p I can detect it up to 6 meters and somewhat more.
I used Python code to find the contours and did some evaluation next. Here the problems occur. Due to the dynamic changes of the background while driving the robot, it's hard to detect the square cleanly; I get long spurious lines, especially from shadows and so on.
Please accept that there are no unsharp images or the like; I sorted those out already, so OpenCV gets clean input images.
Here I tried to remove some contours...
image = cv2_img
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5,5), 0)
edges = cv2.Canny(gray, 60, 255)
cnts, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 100000 and area < 1000:
        contours.remove(contour)
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.01*perimeter, True)
    if len(approx) == 4:
        cv2.circle(cv2_img, (720, 360), 5, (255,0,0), 5)
        cv2.drawContours(cv2_img, [approx], -1, (0, 255, 0), 2)
        M = cv2.moments(approx)
        centers = []
        if M["m00"] != 0:
            self.cX = int(M["m10"] / M["m00"])
            self.cY = int(M["m01"] / M["m00"])
        else:
            self.cX, self.cY = 0, 0
        P1 = approx[0]
        P1x = P1[0][0]
        P1y = P1[0][1]
        P2 = approx[1]
        P2x = P2[0][0]
        P2y = P2[0][1]
        P3 = approx[2]
        P3x = P3[0][0]
        P3y = P3[0][1]
        P4 = approx[3]
        P4x = P4[0][0]
        P4y = P4[0][1]
        cv2.circle(cv2_img, (P1x, P1y), 1, (50,0,255), 4) # left top corner
        cv2.circle(cv2_img, (P2x, P2y), 1, (50,0,255), 4) # bottom left
        cv2.circle(cv2_img, (P3x, P3y), 1, (50,0,255), 4) # bottom right
        cv2.circle(cv2_img, (P4x, P4y), 1, (50,0,255), 4) # top right
        centers.append([self.cX, self.cY])
        cv2.circle(cv2_img, (self.cX, self.cY), 2, (255,0,0), 1)
        cv2.line(cv2_img, (self.cX, self.cY), (1280/2, 720/2), (255,0,0))
cv2.imshow("Image window", cv2_img)
cv2.waitKey(3)
Because contourArea(contour) is about 210000 at near distance (< 0.5 meters) and < 1000 at long distance, I remove small and large contour areas beforehand. Also, a kernel size of 5x5 works better than increasing it to 7x7 (which gives even more small noise and distortion).
Please let me know your thoughts, or let's discuss the dynamic background and how to deal with the pre-processing in such situations. Thanks!
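Two details in the loop above look like bugs worth fixing before any tuning: `area > 100000 and area < 1000` can never be true (it presumably wants `or`), and `contours.remove(contour)` while iterating over the same list silently skips elements. Filtering into a fresh container avoids both; a C++ fragment of the same idea (the Python version has the same shape), with the bounds taken from the question's own numbers:

std::vector<std::vector<cv::Point>> kept;
for (const std::vector<cv::Point>& c : contours)
{
    double area = cv::contourArea(c);
    if (area >= 1000.0 && area <= 210000.0)   // plausible marker area range
        kept.push_back(c);                    // keep instead of remove-in-place
}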
↧
Where exactly is the cvtColor function implemented in the source code?
I am studying computer vision, and for one assignment the professor asked me to write my own function to convert RGB to grayscale. I know how to do it in Python using OpenCV. Can you please point out the implementation files for cvtColor in the OpenCV 3.0 version, so that I can take a look at how it's implemented?
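In the 3.0 source tree, `cvtColor` is implemented in `modules/imgproc/src/color.cpp`, dispatched from the `cv::cvtColor` entry point there. For the assignment itself, the RGB-to-gray case is the ITU-R BT.601 weighted sum, which you can reproduce by hand; a sketch:

#include <opencv2/opencv.hpp>

// Same weighting cvtColor uses for CV_BGR2GRAY (ITU-R BT.601).
cv::Mat bgrToGrayManual(const cv::Mat& bgr)
{
    cv::Mat gray(bgr.rows, bgr.cols, CV_8UC1);
    for (int r = 0; r < bgr.rows; ++r)
        for (int c = 0; c < bgr.cols; ++c)
        {
            cv::Vec3b px = bgr.at<cv::Vec3b>(r, c);   // channels are B, G, R
            gray.at<uchar>(r, c) = cv::saturate_cast<uchar>(
                0.114 * px[0] + 0.587 * px[1] + 0.299 * px[2]);
        }
    return gray;
}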
↧
How to get corner points from contours
I extracted contours using OpenCV findContours. I am looking for a method to find the corners of the extracted contour.
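One common approach is to approximate the contour with a polygon: the polygon's vertices are the corner points. A sketch (the 2%-of-perimeter epsilon is a typical starting value, not a universal one); `cv::goodFeaturesToTrack` on a mask of the contour is an alternative when the shape is not polygonal:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> cornersOf(const std::vector<cv::Point>& contour)
{
    std::vector<cv::Point> corners;
    double eps = 0.02 * cv::arcLength(contour, true);  // approximation tolerance
    cv::approxPolyDP(contour, corners, eps, true);     // closed polygon
    return corners;                                    // vertices = corner points
}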
↧
OpenCV Error: Assertion failed (_dst.fixedType()) in cv::convertPointsHomogeneous
Hello,
I am very new to OpenCV, using OpenCV 3.3.0 with Visual Studio 2012. While using the cv::convertPointsHomogeneous(Thomogeneous, T) function, I get an error message like "Error: Assertion failed (_dst.fixedType()) in cv::convertPointsHomogeneous". Could anyone please help me?
Thanks in advance.
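The assertion means the *destination* passed to `convertPointsHomogeneous` must have a fixed type (the function needs the output dimensionality to know which direction to convert), and a plain `cv::Mat` does not. Typed vectors satisfy it; a sketch assuming a 4-D-homogeneous to 3-D conversion (use `Point3f` to `Point2f` for 3-D to 2-D):

#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    std::vector<cv::Vec4f> Thomogeneous = { cv::Vec4f(2, 4, 6, 2) };
    std::vector<cv::Point3f> T;                    // fixed-type destination
    cv::convertPointsHomogeneous(Thomogeneous, T); // T[0] == (1, 2, 3)
    return 0;
}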
↧