OpenMV M7 Camera

The OpenMV M7 Camera is a small, low-power microcontroller board that allows you to easily implement machine vision applications in the real world. The best part about the OpenMV is that it is capable not only of image capture, but also of face detection, color tracking, QR code reading, and plenty more. If you are looking for an economical camera module boasting multiple high-end features, look no further than the OpenMV M7!

The OpenMV can be programmed in high-level Python scripts (courtesy of the MicroPython Operating System) instead of C/C++. This makes it easier to deal with the complex outputs of machine vision algorithms and to work with high-level data structures. You still have total control over your OpenMV M7 and its I/O pins in Python: you can easily trigger image and video capture on external events, or run machine vision algorithms and use their results to control your I/O pins.
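
As a rough illustration of that workflow, the sketch below waits for an external trigger on an input pin, grabs a frame, and raises an output pin when a matching color region is found. The pin names ("P7" and "P8") and the color threshold are illustrative assumptions rather than values from the official documentation, so adjust them for your own wiring and target.

    import sensor
    from pyb import Pin

    sensor.reset()                       # initialize the image sensor
    sensor.set_pixformat(sensor.RGB565)  # 16-bit color
    sensor.set_framesize(sensor.QVGA)    # 320x240
    sensor.skip_frames(time=2000)        # let the sensor settle

    trigger = Pin("P7", Pin.IN, Pin.PULL_UP)  # external trigger input (active low)
    signal = Pin("P8", Pin.OUT_PP)            # output to another microcontroller

    while True:
        if trigger.value() == 0:              # trigger pulled low by an external event
            img = sensor.snapshot()           # capture a frame
            # Placeholder LAB color threshold; tune it for your target object.
            blobs = img.find_blobs([(30, 100, 15, 127, 15, 127)],
                                   pixels_threshold=200, area_threshold=200)
            signal.value(1 if blobs else 0)   # raise the pin if something was found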

  • The STM32F765VI ARM Cortex-M7 processor running at 216 MHz with 512 KB of RAM and 2 MB of flash. All I/O pins output 3.3V and are 5V tolerant.
  • A full-speed USB (12 Mb/s) interface to your computer; your OpenMV Cam will appear as a virtual COM port and a USB flash drive when plugged in.
  • A μSD card socket capable of 100 Mb/s reads/writes, which allows your OpenMV Cam to record video and easily pull machine vision assets off of the μSD card.
  • A SPI Bus that can run up to 54 Mb/s, allowing you to easily stream image data off the system to either the LCD shield, the WiFi shield, or another microcontroller.
  • An I2C Bus, CAN Bus and Asynchronous Serial Bus (TX/RX) for interfacing with other microcontrollers and sensors.
  • A 12-bit ADC and a 12-bit DAC.
  • Three I/O pins for servo control.
  • Interrupts and PWM on all I/O pins (there are 10 I/O pins on the board).
  • RGB LED and two high-power 850nm IR LEDs.
  • The OV7725 image sensor is capable of taking 640x480 8-bit grayscale images or 320x240 16-bit RGB565 images at 30 FPS. Your OpenMV Cam comes with a 2.8mm lens on a standard M12 lens mount. If you want to use more specialized lenses with your OpenMV Cam, you can easily buy and attach them yourself.
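
For the capture modes described in the last bullet, configuration comes down to a few calls to the standard OpenMV sensor module. The sketch below is a minimal illustration; the warm-up time passed to skip_frames is an assumption, not a required value.

    import sensor

    sensor.reset()

    # 640x480, 8-bit grayscale
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.VGA)
    sensor.skip_frames(time=2000)   # give the sensor time to settle
    gray = sensor.snapshot()

    # 320x240, 16-bit RGB565 color
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)
    color = sensor.snapshot()

    print(gray.width(), gray.height(), color.width(), color.height())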

OpenMV M7 Camera Product Help and Resources

How to Load MicroPython on a Microcontroller Board

September 4, 2018

This tutorial will show you how to load the MicroPython interpreter onto a variety of development boards.

Core Skill: Programming

If a board needs code or communicates somehow, you're going to need to know how to program or interface with it. The programming skill is all about communication and code.


Skill Level: Rookie - You will need a better fundamental understanding of what code is and how it works. You will be using beginner-level software and development tools like Arduino. You will be dealing directly with code, but numerous examples and libraries are available. Sensors or shields will communicate with serial or TTL.


Core Skill: Electrical Prototyping

If it requires power, you need to know how much, what all the pins do, and how to hook it up. You may need to reference datasheets, schematics, and know the ins and outs of electronics.


Skill Level: Rookie - You may be required to know a bit more about the component, such as orientation, or how to hook it up, in addition to power requirements. You will need to understand polarized components.


Comments

Looking for answers to technical questions?

We welcome your comments and suggestions below. However, if you are looking for solutions to technical questions, please see our Technical Assistance page.

  • Member #394180 / about 8 years ago / 4

    The OpenMV can be programmed in high-level Python scripts (courtesy of the MicroPython Operating System) instead of C/C++. This makes it easier to deal with the complex outputs of machine vision algorithms and to work with high-level data structures.

    But I prefer C/C++. I have tons of existing C/C++ infrastructure that I can reuse. C/C++ runs faster because there's no interpreter overhead. In addition, a decent compiler can optimize my code to run as fast as possible. I get to control memory allocations and my code can be deterministic hard real time. I can put indentations where I want them and curly braces let me know that I've changed scope. Best of all, I can find 90%+ of my errors at compile time, not run time.

    Having Python-only control is a huge bug, not a feature, a terrible blemish on an otherwise interesting product. (Other than that, Mrs. Lincoln, how did you like the play?)

    • NorthernPike / about 8 years ago / 2

      In defense of the M7, I humbly submit this reply... ><> Yes, you're right in some aspects. I hear what you're saying. However... The speed issue based on compiled versus interpreted isn't an issue for a 216 MHz headless embedded micro. At least not with me. Forced indentation is OK with me. Eliminating the curlies just seems to clean up the code by getting rid of some noise. And it tends to help conformance issues, resulting in common coding tactics. Compile time errors are syntax errors. Good coders like you and me don't make a lot of them. So, no biggy here either in my view.

      Now, I will say this. uPOS is not, in my view, an "Operating System". It's more of an "Operating Environment". I like the Arduino world for the simple fact that there is no OS per se. You have Setup() and Loop(). Period. Perfect. The embedded micro does one thing and it does it very well. Just like MicroPython where it runs main.py. Period.

      I have a pyBoard and am learning Python using the uPOS it provides. I am doing this while I wait for my uArm Swift Pro robotic arm to arrive, which will be coming with an M7. I got in on the Kickstarter, which is shipping product now. I'm going to have the M7 talk to the pyBoard, which will talk to the arm, to simulate an autonomous entity. The M7 will be its eye, a microphone for an ear, an Emic 2 for a voice, and a whole lotta code for enhanced interaction with the real world.

      And that's where things get hairy with Python. OK, 2 + 2 is a very simple "rvalue". But when an rvalue contains lists of mutables filtered by regular expressions used as part of a comprehension for generating a set of tuples, the glaze begins to form over your eyeballs. How did they put it, "This makes it easier to deal with the complex outputs of machine vision algorithms and to work with high-level data structures." Yeah, easy once you get the hang of it. And that's what isn't so easy. Which brings me to another point on the speed issue. When Python iterates through these data structures, it's using pointers, C++ pointers, and we all know how much faster things execute using pointers. So, again, speed isn't an issue with me.

      A huge bug, naaa, just a different way of doing things. Not a feature, no, of course not. It's the way it is. A terrible blemish? Strange way to describe software. Not sure how to respond to that so I won't.

      In any case, the M7 is a very cool "sensor" for vision. And it's only going to get better. As for uPOS, there is a lot of work going on in this area which will only further its appeal to the Maker World. One last thing. I do want to give SparkFun kudos for providing this device. Even though they went from my primary source to my tertiary source and I pretty much no longer use them to provide me with my parts, I do want them to know they made a good choice here.

      • Kye / about 8 years ago / 2

        Hi NorthernPike,

        This is Kwabena (OpenMV President). I hear what you're saying about Python types being complicated. It's a different world from C. But, it's easy to learn.

        Note that by high-level data structure manipulation I mean things like this:

        for blob in sorted(raw_blob_list, key = lambda x: x.area(), reverse = True)[0:max_blobs]

        The above Python line runs a "for" loop over a list of color blob detections that has been sorted by the area of each blob in descending order, keeping only the first "max_blobs" detections. Note that each color blob detection is an object with about 10 fields. I could have sorted the detections on something other than area().

        In Python you're able to do all of this in one line, more or less. In C... well, life is harder. Doing good computer vision work often requires you to filter things, sort things, etc. Python is great for this.

        For more info on the above... here's a link to an OpenMV Cam script that emulates the CMUcam5 Pixy which the above line of code is from:

        https://github.com/openmv/openmv/blob/master/usr/examples/17-Pixy-Emulation/pixy_uart_emulation.py
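
        For context, here's roughly how a line like that slots into a full blob-tracking loop. This is just a sketch; the color threshold and max_blobs value are placeholders, not the actual settings from the Pixy emulation script:

            import sensor

            sensor.reset()
            sensor.set_pixformat(sensor.RGB565)
            sensor.set_framesize(sensor.QVGA)
            sensor.skip_frames(time=2000)

            max_blobs = 5                                # placeholder limit
            thresholds = [(30, 100, 15, 127, 15, 127)]   # placeholder LAB color threshold

            while True:
                img = sensor.snapshot()
                raw_blob_list = img.find_blobs(thresholds,
                                               pixels_threshold=100,
                                               area_threshold=100)
                # Keep only the "max_blobs" largest detections, biggest first.
                for blob in sorted(raw_blob_list, key=lambda x: x.area(), reverse=True)[0:max_blobs]:
                    img.draw_rectangle(blob.rect())      # mark each detection
                    img.draw_cross(blob.cx(), blob.cy())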

        • NorthernPike / about 8 years ago * / 1

          Hello Kye, Thank you for the reply. A comment from the President of the company is quite impressive and reinforces my confidence in your product.

          Now, the Python "types" being complicated isn't what I was actually trying to get at. It's the "use" of the types that gets interesting. Just look at your example for instance. You have a "List", an iterator calling a method for use in comparison, and you're "slicing" that list to use the first max_blobs items. At least I think that's what it appears to be doing. Again, it's how these data types are used or "assembled" together to form a really cool yet complicated "rvalue".

          Now, put that line of code in front of a 12 or 13 year old and see what happens. That's why I will not say Python is easy. I won't lie to those trying to learn it. I've been coding since 1975 from Fortran written on punchcards and run on an IBM 360 to Machine code to Assembly language to Cobol, Pascal, Basic, Lisp, C and C with classes, er C++. It's the "obfuscation" issue made so popular by C.

          The fact that Python can allow one-line statements to be constructed with such power can be concerning when trying to promote the technology. Sometimes it's better to have "easy-to-read" code than compact super-duper fast code. But, your example really isn't clever or unique or even impressive. It's pretty much standard stuff for Python. And that's the crux of my point.

          In any case, good luck with the M7. I look forward to receiving mine soon. And thanks again for the link to the code on GitHub.

          PS - Shouldn't there be a colon ":" at the end of your example statement above? :))

    • Kye / about 8 years ago / 1

      Hi Member #394180,

      This is Kwabena (OpenMV President), you can actually program the OpenMV Cam in C. We have an open-source build system available here: https://github.com/openmv/openmv/wiki/Firmware-Development. No external expensive programmer tools are required. All you need is the OpenMV Cam and a USB cable.

      The above said, our firmware is over 1 MB in size, which takes a couple of minutes to program onto the STM32F7 chip. If you are developing code in C you need to be an excellent programmer who doesn't make silly mistakes to get any work done. For beginner students and hobbyists... this isn't the case. Having the Python system on-board makes it really easy to get an application that uses computer vision in the real world working quickly, since you're going to have to constantly tweak settings until everything works right. Being able to compile and run in under a second is HUGE.

  • voiceafx / about 8 years ago / 3

    Make one of these with an imager that has a global shutter, and I will love you forever!

  • bennard / about 8 years ago / 2

    Looks neat, and I'm sure it has its uses.

    Why would I get this instead of an RPi with a camera? I can get an RPi3+Camera+SD for $65, or an RPiZW+Camera+SD for $45. The RPis can do some reasonable OpenCV stuff in both C++ and Python. Not great but not horrible either. (The Zero is pretty slow but just fine as a hi-def security camera sending pics to a server.) I have both and they work for what I'm doing.

    • Kye / about 8 years ago / 1

      Hi bennard,

      This is Kwabena (OpenMV President).

      If you know what you're doing with a Raspberry Pi and can sling computer vision code easily along with I/O pin control, then there's no need for the OpenMV Cam. But if you aren't a master, you're going to run into a steep hill with OpenCV and then with trying to get the output from your code to control stuff in the real world.

      The whole point of the OpenMV Cam is to make things "easier" - in particular doing robotics. Note that we give you a nice IDE to program the system with that makes your development experience a lot nicer than using a Raspberry Pi with OpenCV.

  • Member #658570 / about 8 years ago / 2

    +1 on global shutter request, important for robotics, etc., where the platform may not be stable (moving). Image sensor is capable of 60 fps, might help if M7 can operate it at that speed rather than 30 fps.

    • Kye / about 8 years ago / 1

      Hi Member #658570 ,

      This is Kwabena (OpenMV President).

      The next version of the system will have this. Note that we've found a way to drive the camera faster to enable higher than 30 FPS for some algorithms.

  • Ken Reiss / about 8 years ago / 2

    Will it do facial recognition (identification of a person in a photo based on a library of faces) or only facial detection (identifying whether there is a face, and perhaps its coordinates)?

    • Chiel / about 8 years ago / 2

      It can do basic detection of faces and relay information about it, but I don't believe it can actually identify a person. At least I can't find anything on that.

    • Kye / about 8 years ago / 1

      Hi reissinnovations,

      This is Kwabena (OpenMV President). We can detect faces using a Haar Cascade and then tell you the face locations. Facial recognition is possible in the future, but we haven't written the code for that.

      That said, you can take pictures with the OpenMV Cam and then transmit said pictures to facial recognition services if you have an internet connection for the OpenMV Cam using our WiFi shield.

  • Very cool product. Will SparkFun create some user guides and example projects using this hardware soon? Hope so :)

  • Member #315892 / about 7 years ago / 1

    Can the camera be run at higher frame rates than 30fps with reduced resolution, bit depth, or a smaller ROI? Or is 30fps a hard limit for some reason?

    Also the OV7725 product page suggests the camera can run at 60fps at full VGA - where does the 30fps frame rate limitation come from on the OpenMV?

  • Member #424527 / about 8 years ago / 1

    Will Sparkfun also stock the lenses and the LCD shield in the future? I need them! (https://openmv.io/collections/products)

  • MRSHKO / about 8 years ago / 1

    pyb.LED(3) turns on the blue LED, not the IR. I cannot get the IR LED to turn on with any of the parameters 1, 2, 3 or 4. Is the FET dead on my board?

  • Member #559126 / about 8 years ago / 1

    Is it possible and easy to view / stream the video from the camera to somewhere on the internet using a minimum number of external components? Would a simple Arduino or ESP8266 do, or do I need a dedicated computer to do that?

    • Kye / about 8 years ago / 1

      Hi Member #559126,

      This is Kwabena (OpenMV President). Yes, we have a WiFi shield which can get your OpenMV Cam online.
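
      Getting online looks roughly like this with the shield's WINC driver (the SSID and key below are placeholders); pushing JPEG frames to a server from there is ordinary socket code on top of the connection:

          import network

          SSID = "YOUR_SSID"   # placeholder network name
          KEY = "YOUR_KEY"     # placeholder password

          wlan = network.WINC()
          wlan.connect(SSID, key=KEY, security=wlan.WPA_PSK)
          print(wlan.ifconfig())   # prints the shield's network configuration once connected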

  • Wouldn't $65 spent on a low-end Android phone yield a better camera, a faster processor and a richer choice of programming environments? You would get a USB2 connection fast enough to stream live video and a fast enough system to make it possible to do that AND do various recognition processes locally. The only downsides are losing the serial/SPI and I2C outputs and the IR capability, and since this product doesn't feature a controllable IR cut filter, that is a dubious feature at best.

    At some point repurposing mass consumer products becomes more practical than making a more hacking-friendly board out of the obsolete parts that small manufacturers can still source and get datasheets for.

    • Kye / about 8 years ago / 1

      Hi John Morris,

      You're totally correct with your above points if you're optimizing for performance and cost. Lots of folks are releasing Android systems like this right now.

      That said, minimal effort is put into making such systems as easy and as fun to use as your first microcontroller. The whole point of the OpenMV Cam project is to make it "easy" to control things in the real world using computer vision functions. For the OpenMV Cam, this means having a hard 250 mA current consumption roof so you can use the system more easily with robotic applications and program it from your computer with one standard USB cable and nothing else. For more on why... see our FAQ here: https://openmv.io/pages/faq

      About the M7 processor - it's about half the speed of a Raspberry Pi Zero but with a lot less RAM. It can actually do USB 2.0, but we don't enable that feature since it would mean we couldn't have DFU support anymore, which is what makes the OpenMV Cam unbrickable. Given that we provide firmware updates, it's important that you can ALWAYS recover the system.

      Note that future OpenMV Cams will have more powerful processors. The idea behind the OpenMV Cam project is to make developing easy by giving you an Arduino-like programming experience (but with much better tools). In particular, one IDE which can be used to control the whole system, not a lot of different tools bundled together.

  • Member #666505 / about 8 years ago / 1

    You EU peoples had best be circumspect with any monitoring done with this stuff. A relative in the old country hacked some Nikon coolpix cameras to make a generic monitoring system with tracking smarts, then promptly caught some local brats doing bad stuff to her property, then became the subject of an investigation by local LE when the visual evidence was presented.

    On this side of the cod pond, you will need to read your state's code. Most states do not care if you call it a security system, but some states and locales have regulations for monitoring people and collecting and/or storing data on privates.

    As for most image processing, desktop Python is my preference. C or Python may not be the issue - as embedded systems simply do not have the resources that a desktop Linux box or OSX box would provide. And windoze will only provide headaches and anger.

  • jlang / about 8 years ago / 1

    Can this device detect text? For instance, to compare license plate data to a database?

    • Member #666505 / about 8 years ago / 2

      Geez, been tons of code for this stuff for years. You people ever try that fancy-pants search engine by them google folks? Your inner Big Brother awaits the wakening.

      Look at openalpr and opencv. But you will not like doing this with micropy.

    • Kye / about 8 years ago / 1

      Hi jlang,

      This is Kwabena (OpenMV President).

      Not yet, but it's on the to-do list for future functionality. We release firmware updates regularly, so the feature list expands over time.

Customer Reviews

5 out of 5

Based on 5 ratings:

Currently viewing all customer reviews.

3 of 3 found this helpful:

Excellent product for the machine vision novice

Great product that delivers exactly on its intent at a really affordable price. This is a compact, complete machine vision system that works right out of the box. The example code in the IDE provides a variety of functional object recognition, and is easy to tweak into what you want.

Would love to see built-in text recognition, similar to the included QR, ARtag, and barcode readers.

2 of 2 found this helpful:

This thing is AMAZING!

I was able to easily use the examples provided to get my Parallax ActivityBot to follow a yellow ball, and use its AprilTag features to set up localization - basically it was like I had miniature indoor GPS for my ActivityBot. I am confident I will, with a bit more effort, get something close to a SLAM implementation. SO COOL!! Very happy with this module!

1 of 1 found this helpful:

Great Camera - and more

I downloaded the software and plugged in the camera and got it working after a few minutes of trial and error. A more detailed step by step tutorial would be helpful. You have to adjust the focus until you actually "see" something. The example programs show you some of the capabilities - which are numerous. The 1st example program (hello world) had some timing statements that gave errors when I tried to upload the program. Got it working after commenting them out.

The code is in python - which is enough like C++ that it isn't a problem. The camera is a "sensor" object and you manipulate its attributes like objects in C++. If you're not familiar with python but understand OOP in C++ you won't have any problems.

Right now I'm just playing around with it to get more experience. It works great.

1 of 1 found this helpful:

It works so easily

Upgraded, tested a few examples (colors, face recognition...). Everything just great. A few weeks more and I'll try to use the Arduino with it.

1 of 1 found this helpful:

Really wonderful and surprisingly powerful

Magnificent platform for vision applications, with spectacularly good software and IDE. Great support from the community. Highly recommended.