Sunday, July 14, 2024

Learning SDL Part II

In Learning SDL Part I I talked about the 2DLight example from https://glusoft.com/sdl2-tutorials. It might be a bit too advanced for beginners: the examples have external dependencies and are not easy to understand without basic SDL concepts. The code also has not been maintained for years, so due to build-tool or package updates it may not build out of the box. So I decided to start from a simpler one: https://lazyfoo.net/tutorials/SDL

The first example is "01_hello_SDL". Since it is a simple one, I'm going to try building it with NMake. A simple Makefile for x64 looks like this:

# Compiler
CC=cl
# Include directory for SDL headers
SDL_INCLUDE=C:\path\to\SDL\include\SDL2
# Library directory for SDL library files
SDL_LIB=C:\path\to\SDL\lib\x64
# Compiler flags; append /Zi to enable debugging
CFLAGS=/I$(SDL_INCLUDE) /Dmain=SDL_main
# Linker flags
LFLAGS=/link /LIBPATH:$(SDL_LIB) SDL2.lib SDL2main.lib SDL2_image.lib Shell32.lib /SUBSYSTEM:CONSOLE

# Target executable name
TARGET=01_hello_SDL.exe
# Source files
SOURCES=01_hello_SDL.cpp

# Rule to make the target
$(TARGET): $(SOURCES)
    $(CC) $(CFLAGS) /Fe:$(TARGET) $(SOURCES) $(LFLAGS)

# Clean target
clean:
    del $(TARGET) *.obj

Note: 1) Have to specify /SUBSYSTEM:CONSOLE, otherwise you may get an "entry point not defined" error, as a Win32 app looks for WinMain as the entry point.

2) Have to link SDL2main.lib and Shell32.lib. SDL2main.lib provides the SDL entry function. Shell32.lib is needed as well; otherwise you will get the error "LNK2019: unresolved external symbol __imp_CommandLineToArgvW referenced in function main_getcmdline".

3) Some projects need more image format support, such as PNG, and must link more libs. For example, 06_extension_libraries_and_loading_other_image_formats needs libpng16-16.dll and zlib1.dll (a dependency of the PNG DLL) at runtime. If zlib1.dll is missing, you will only see an error complaining "Failed loading libpng16-16.dll", which can be misleading.

4) Projects that need fonts must link SDL2_ttf.lib, and need libfreetype-6.dll at runtime.

5) Projects that need audio must link SDL2_mixer.lib.

Run this to set up the x64 environment: "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvars64.bat", then run 'NMake' to build. You can copy the x64 SDL DLLs to the C:\Windows\System32 folder so you won't need to copy them to the working folder every time.

6) For projects using OpenGL: OpenGL is very likely already installed on the system. Check whether opengl32.dll and glu32.dll are under \windows\system32. To link, add these two libs:

opengl32.lib glu32.lib

7) The OpenGL Extension Wrangler library (GLEW) can be downloaded from https://glew.sourceforge.net/

Learning SDL Part I

 

While trying to add more features to the Tanks game, I realized that I needed to dive deeper into SDL programming. For this reason, I went to https://glusoft.com/sdl2-tutorials/ and downloaded the project files. As I don't have Visual Studio installed, only the Build Tools, some tweaks are needed to build the examples. The downloaded package comes with solution (.sln) and project (.vcxproj) files, so the project can be built with a command like: msbuild 2DLight.sln /p:Configuration=Release /p:Platform="x86"

Also need to update the vcxproj file: change v141 to v143, set WindowsTargetPlatformVersion to 10.0.22621.0 (matching my environment), and set AdditionalIncludeDirectories and AdditionalLibraryDirectories to my SDL2 header and lib paths; make sure these settings are in the 'x64' section if using x64 as the platform in the command line above. For the first 2DLight example, you also need to download https://github.com/trylock/visibility and SDL2_gfx. Note that SDL_gfx is not from libsdl.org like the other SDL libs. Also note there are both SDL2_gfx-1.0.4.tar.gz (.zip) and SDL_gfx-2.0.27.tar.gz; although 2.0.27 is the newer release (Ver 1.0.4 – 11 Feb 2018 vs Ver 2.0.27 – Sun Dec 10 2023), it is not compatible with SDL2. It does come with an SDL2 patch (which itself needs tweaks, such as linking with SDL2.lib instead of SDL.lib and renaming the project file to SDL2_gfx.vcxproj), but the resulting lib may not work with the example code from sdl2-tutorials.

A similar update is needed for SDL2_gfx.vcxproj. It does not have an x86 platform, but Win32, so the build command is:
msbuild SDL2_gfx.sln /p:Configuration=Release /p:Platform="Win32"

May see this error: SDL2_gfx-1.0.4\SDL2_gfxPrimitives_font.h(1559,1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss

Just removing these characters (all in inline comments) should be OK.

For the error SDL2_gfx-1.0.4\SDL2_gfxPrimitives.c(1771,2): error C2169: 'lrint': intrinsic function, cannot be defined, refer to https://www.ferzkopp.net/wordpress/2016/01/02/sdl_gfx-sdl2_gfx. Just comment out the define/implementation, since it is already defined as an intrinsic function.

There will be a bunch of errors when building the tests if those vcxproj files have not been updated; they can be ignored for now. If all test project files are updated, 3 of the 4 tests should work out of the box, but TestImageFilter displays nothing and prints nothing to the terminal. Checking the code, this image filter test uses 'printf' to log information and has no graphics display. The 'printf' doesn't spit out anything either, probably because SDL hijacked the console I/O. The C code also includes 'windows.h' when 'WIN32' is defined, but in fact, with or without this header makes no difference. The way to get some output is to replace the 'printf' calls in the code with 'SDL_Log'. With SDL_Log, I got 23 of 27 tests passing.

Now, back to the 2DLight example: everything builds OK, but I get a runtime error: "The application was unable to start correctly (0xc000007b)". Turns out this is due to mixing x64 build tools when building x86 DLLs and executables, and the resulting behavior is very weird, so just avoid doing that.

Now with an x86 build environment, building SDL2_gfx as a Win32 target, using the other x86 SDL2 libs/DLLs, and making sure the image/PNG files are copied to the run folder, there is no more 0xc000007b. However, I'm now seeing an assert for "vector subscript out of range". Per Copilot:

The vx.reserve(result.size()) call in the code snippet reserves memory for result.size() elements in the vector vx. However, it does not change the size of the vector. The reserve() function is used to allocate memory in advance to prevent frequent reallocations when adding elements to the vector.

To actually change the size of the vector to match the reserved capacity, you can use the resize() function instead of reserve(). The resize() function not only reserves memory but also sets the size of the vector to the specified value.

After I replaced the two reserve() calls with resize(), the code mostly works as expected. I may still see 'Expression: vector subscript out of range' sometimes, usually when moving the mouse cursor out of the frame, probably due to missing boundary checks. The updated source code was pushed to the forked https://github.com/quyq/2DLight-SDL.

If you are using VSCode as your editor like me, you may create tasks.json like this to allow building the project with MSBuild using the shortcut Ctrl+Shift+B (the top half sets the build env for x86, the bottom half runs MSBuild with the solution file):

{
    "version": "2.0.0",
    "windows": {
      "options": {
        "shell": {
          "executable": "cmd.exe",
          "args": [
            "/C",
            // The path of batch file and platform parameter for setting the build env
            "\"C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Auxiliary/Build/vcvarsall.bat\"",
            "x86",
            "&&"
          ]
        }
      }
    },
    "tasks": [
      {
        "type": "shell",
        "label": "MSBuild.exe build active file",
        "command": "MSBuild.exe",
        "args": [
            "2DLight.sln",
            "-p:Configuration=Release"
        ],
        "problemMatcher": ["$msCompile"],
        "group": {
          "kind": "build",
          "isDefault": true
        }
      }
    ]
}

If you want to use the VSCode built-in debugger, you may create launch.json as:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "2DLight",
            "type": "cppvsdbg",
            "request": "launch",
            "program": "2DLight.exe",
            "args": [],
            "stopAtEntry": false,
            "cwd": "${workspaceFolder}/Release",
            "environment": [],
            "console": "externalTerminal"
        }
    ]
}

Wednesday, July 10, 2024

Update Tanks (坦克大战) with network support

With Revisit 坦克大战编程, I can easily build Tanks on Windows or under WSL. However, with one player the game is a bit boring and a bit frustrating; and with two players on the same PC, four hands on one keyboard get a bit crowded. It would be much more fun if two players could play together remotely, or even play against each other. So we need to bring in network support: choose a suitable networking library (e.g., SDL_net, which complements SDL, or another networking library like Boost.Asio for C++), initialize the networking in the application, and set up one instance as the server and the other as the client.

I haven't used either networking library, so I will pick SDL_net since we already use other libraries from SDL. SDL_net has moved to GitHub: https://github.com/libsdl-org/SDL_net. As with the other SDL libraries, I downloaded SDL2_net-devel-2.2.0-VC.zip.

Now it's time to dive deep into the code to understand the design. The main loop is App::run, with a check/switch on the m_app_state state, eventProces, m_app_state->update, m_app_state->draw, and FPS control. m_app_state initially points to an instance of Menu, then points to instances of Game, Scores and Menu sequentially after nextState is invoked, which updates the state machine. Menu is the state that lets the user select players and quit; Scores is the state that shows the score. So we are mostly interested in the Game state.

It would be complex to maintain two state machines on two hosts, even if one is a mirror of the other: all objects would have to be cloned on the fly with dynamic create and destroy, input from both sides would have to be merged, and one side would need to be the master for random number generation and user-driven state control. Considering network latency too, keeping server and client in sync could get messy. The easier approach is to let the master maintain the state machine; the client side just relays its input to the server and mirrors the render result from the server.
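A minimal sketch of the relay idea in Python (the message layout, type code, and function names here are my own assumptions for illustration, not from the Tanks code or SDL_net): the client packs each input event into a fixed-size binary message, and the master unpacks it before feeding it into the state machine.

```python
import struct

# Hypothetical wire format for relayed input: 1-byte message type,
# 4-byte frame number, 1-byte key code, 1-byte pressed/released flag.
INPUT_FMT = "!BIBB"   # network byte order, 7 bytes total
MSG_INPUT = 1

def pack_input(frame: int, key: int, pressed: bool) -> bytes:
    """Serialize one input event for sending to the master."""
    return struct.pack(INPUT_FMT, MSG_INPUT, frame, key, int(pressed))

def unpack_input(data: bytes):
    """Parse one input event on the master side."""
    msg_type, frame, key, pressed = struct.unpack(INPUT_FMT, data)
    assert msg_type == MSG_INPUT
    return frame, key, bool(pressed)

# Round-trip example: client packs, master unpacks.
packet = pack_input(frame=120, key=0x52, pressed=True)
print(unpack_input(packet))  # (120, 82, True)
```

With SDL_net, the same bytes would travel through its TCP send/receive calls; a fixed-size format makes it easy to tell when a complete message has arrived.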

(to be continued)

Sunday, June 30, 2024

Program D-Link DCS-5010L - Part 3

As mentioned in Program D-Link DCS-5010L - Part 2, I wasn't able to successfully open the DCS-5010L stream with OpenCV+GStreamer, so I'm going to explore OpenCV+ffmpeg. Ffmpeg is a famous open-source multimedia framework containing a set of utilities with support for almost all codecs. Here is a Ffmpeg vs. GStreamer comparison.

Going this route, there is FfmpegCV, which may serve as an OpenCV replacement with ffmpeg support and has an API compatible with OpenCV. I'm going to give it a try. As with GStreamer, you need to install the ffmpeg executable separately. Then do: pip install ffmpegcv

This installs the stable version, without CUDA acceleration since I don't have an nVidia GPU. After trying around, I realized I need to use this API to open the stream:

cap = ffmpegcv.VideoCaptureStream(stream_url)

However, I don't see how it can help play the audio.

Then I turned to PyAV. With it, I figured out that 'video.cgi' only provides the video stream; you need to open 'audio.cgi' for the audio stream, which is in WAV format.
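Since the audio stream is WAV, the bytes from audio.cgi should begin with a RIFF/WAVE header describing the channel count, sample rate and bit depth. As a stdlib-only sketch (parse_wav_header is my own helper, and I'm assuming the header arrives at the start of the HTTP response), the header can be parsed like this before handing the raw samples to an audio library:

```python
import io
import struct
import wave

def parse_wav_header(data: bytes):
    """Parse the fmt chunk of a RIFF/WAVE header; returns (channels, rate, bits)."""
    assert data[0:4] == b"RIFF" and data[8:12] == b"WAVE", "not a WAV stream"
    pos = 12
    # Scan chunks until the 'fmt ' chunk is found.
    while pos + 8 <= len(data):
        chunk_id, size = struct.unpack("<4sI", data[pos:pos + 8])
        if chunk_id == b"fmt ":
            _fmt, channels, rate, _byte_rate, _align, bits = struct.unpack(
                "<HHIIHH", data[pos + 8:pos + 24])
            return channels, rate, bits
        pos += 8 + size
    raise ValueError("fmt chunk not found")

# Build a tiny in-memory WAV file to demonstrate (mono, 8 kHz, 16-bit).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b"\x00\x00" * 8)
print(parse_wav_header(buf.getvalue()))  # (1, 8000, 16)
```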

I added my code to the forked dlink-dcs-python-lib GitHub repository, with arrow-key support for tilt/pan of the camera. There is no need to put the OpenCV video into a Tcl/Tk gadget, but I may create a Tk GUI for configuration/options such as video file saving path, resolution, motion detection and so on.

Saturday, June 22, 2024

Program D-Link DCS-5010L - Part 2

OpenCV mainly focuses on video; it does not support audio directly. For audio playback it relies on ffmpeg or GStreamer. ffmpeg is mostly a standalone offline multimedia conversion tool. OpenCV has built-in GStreamer support, but it is optional. I used the pip-installed opencv-python on Windows with Python 3.12, and GStreamer isn't enabled; refer to the opencv-python GitHub page and https://discuss.bluerobotics.com/t/opencv-python-with-gstreamer-backend/8842. Even installing the full package with pip install opencv-contrib-python won't get a GStreamer-enabled cv2; you may need to build from source. Either way, you need to install GStreamer first, available here: https://gstreamer.freedesktop.org/download. To build GStreamer-enabled opencv-python:

git clone --recursive https://github.com/opencv/opencv-python.git
cd .\opencv-python
set CMAKE_ARGS="-DWITH_GSTREAMER=ON"
(or, in PowerShell: $env:CMAKE_ARGS="-DWITH_GSTREAMER=ON")
pip wheel . --verbose

The build gets setuptools 59.2.0. I'm using miniConda and I'm seeing "conda 24.1.2 requires setuptools>=60.0.0, but you have setuptools 59.2.0 which is incompatible". This is not my environment's setuptools being out of date; I have tried pip install setuptools --upgrade, python -m pip install pip --upgrade, and conda update -n base setuptools, and nothing works. And after this, I'm seeing:

  Running command Getting requirements to build wheel
  Traceback (most recent call last):
    File "C:\ProgramData\miniconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
      main()
...
    File "C:\Users\squ\AppData\Local\Temp\pip-build-env-lekizdpu\overlay\Lib\site-packages\pkg_resources\__init__.py", line 2172, in <module>
      register_finder(pkgutil.ImpImporter, find_on_path)
                      ^^^^^^^^^^^^^^^^^^^
  AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
  error: subprocess-exited-with-error

Sounds same as opencv-python issues 988, one comment there mentioned:

I encountered the same problem while running pip wheel . --verbose |& tee install_opencv.log. It worked after removing the ==59.2.0" from the "setuptools==59.2.0" https://github.com/opencv/opencv-python/blob/4.x/pyproject.toml#L16.

The pyproject.toml file is under the root; with the setuptools pin removed, the original problem is resolved. Building under Windows needs Visual Studio Build Tools (refer to Revisit 坦克大战编程), and the build must be done under the native build environment. The build will try the Ninja, Visual Studio and NMake generators, and will try from v144 down to v141. I have v143 and the "Developer Command Prompt for VS 2022" environment launched; all Ninja and NMake attempts fail with no support for the platform. VS v143 progresses a bit further, but still errors with:

 -- Trying 'Visual Studio 17 2022 x64 v143' generator

  -- The C compiler identification is unknown
  CMake Error at CMakeLists.txt:3 (ENABLE_LANGUAGE):
    No CMAKE_C_COMPILER could be found

After some research, it turns out the Windows SDK is needed (I had deselected it when installing Visual Studio Build Tools to save some space). The build takes a while, depending on your computer, but it works. The opencv_python-*.whl file will be generated in the root folder; just run pip to install it.

At the beginning of the build, it shows the configuration; make sure GStreamer is ON. It will stay OFF if the GStreamer development package isn't installed. With GStreamer enabled, I'm getting a runtime error:

ImportError: DLL load failed while importing cv2: The specified module could not be found

which is similar to opencv-python issue 856. I installed the Microsoft Visual C++ Redistributable for Visual Studio 2022, but that didn't help. My Windows is not the 'N' edition, so it is not a Windows Media Feature Pack problem. The wheel built before the GStreamer option was enabled had no problem, so I believe the issue is with the GStreamer DLLs, but adding "C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0" to the path didn't help. I had to hook up Sysinternals' Process Monitor, which shows the run is looking for a bunch of GStreamer runtime DLLs that are not in the site-packages\cv2 folder. Adding "C:\gstreamer\1.0\msvc_x86_64\bin" to the PATH environment variable didn't help either. Copying the DLLs over works, but is not a clean way. Per https://stackoverflow.com/questions/214852/python-module-dlls, since Python 3.8 there is a mechanism to do this more securely; read the documentation on os.add_dll_directory at https://docs.python.org/3/library/os.html#os.add_dll_directory. So doing this in code works:

import os
gst_root = os.getenv('GSTREAMER_1_0_ROOT_MSVC_X86_64', 'C:/gstreamer/1.0/msvc_x86_64/')
os.add_dll_directory(gst_root+'bin')
import cv2

With GStreamer enabled, I'm still not able to open the stream, with this log:

[ WARN:0@2.620] global cap_gstreamer.cpp:2840 cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module souphttpsrc0 reported: Could not establish connection to server.
[ WARN:0@2.623] global cap_gstreamer.cpp:1698 cv::GStreamerCapture::open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0@2.623] global cap_gstreamer.cpp:1173 cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ WARN:0@2.623] global cap.cpp:206 cv::VideoCapture::open VIDEOIO(GSTREAMER): backend is generally available but can't be used to capture by name

A bit frustrating. The HTTP URL is fine, as I can open it without using GStreamer. Most examples I can find use the rtsp protocol, not http. I will leave this aside until I figure out how to use GStreamer alone to open the stream. I'm going to try OpenCV+ffmpeg next.
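One more thing worth trying before giving up (an untested assumption on my part): instead of passing the raw HTTP URL, hand cv2.VideoCapture an explicit GStreamer pipeline string that names souphttpsrc with the camera credentials, so the backend doesn't have to guess the elements. souphttpsrc, multipartdemux and jpegdec are standard GStreamer plugins, but I have not verified this pipeline against the DCS-5010L:

```python
import os

# Hypothetical defaults for illustration only.
CAM_HOST = os.getenv('CAM_HOST', '192.168.0.100')
CAM_USER = os.getenv('CAM_USER', 'admin')
CAM_PASS = os.getenv('CAM_PASS', '')

# Explicit pipeline for the MJPEG video.cgi endpoint: fetch over HTTP,
# split the multipart stream, decode each JPEG, feed frames to OpenCV.
pipeline = (
    f"souphttpsrc location=http://{CAM_HOST}/video.cgi "
    f"user-id={CAM_USER} user-pw={CAM_PASS} "
    "! multipartdemux ! jpegdec ! videoconvert ! appsink"
)
print(pipeline)
# Then: cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```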

Friday, June 21, 2024

Program for D-Link DCS-5010L

D-Link DCS-5010L is a pretty old-generation pan & tilt WiFi network camera. It is so old that it can only be accessed with the old IE web browser, as it needs ActiveX; Edge may work using IE mode, and it is almost impossible to use Firefox or Chrome. Some features and functions also need Java installed, and may not work due to the old firmware and interface.

D-Link's Android app cannot even connect to the camera; I gave up on that as it was so frustrating. The TinyCam app can view and control the movement of the camera, but cannot change video resolution, codec type, or motion detection settings.

So I came to the idea of using a Python script to control and view the camera, for more flexibility; later I may make my own Android app for it. There is the GitHub dlink-dcs-python-lib, which can be used as a library or reference. It has unit test code, but no stream video viewer code. GitHub Copilot suggested using opencv-python for video processing and requests for handling HTTP requests. Code like the one below worked quite well:

import cv2
import requests
import numpy as np
import os

CAM_HOST = os.environ.get('CAM_HOST') or ''
CAM_PORT = os.environ.get('CAM_PORT', 80)
CAM_USER = os.getenv('CAM_USER', 'admin')
CAM_PASS = os.getenv('CAM_PASS', '')

# URL of the video stream
stream_url = f'http://{CAM_HOST}:{CAM_PORT}/video.cgi'

# Start a session
session = requests.Session()
response = session.get(stream_url, stream=True, auth=(CAM_USER, CAM_PASS))

# Check if the connection to the stream is successful
if response.status_code == 200:
    bytes_data = bytes()
    for chunk in response.iter_content(chunk_size=1024):
        bytes_data += chunk
        a = bytes_data.find(b'\xff\xd8')  # JPEG start
        b = bytes_data.find(b'\xff\xd9')  # JPEG end
        if a != -1 and b != -1:
            jpg = bytes_data[a:b+2]  # Extract the JPEG image
            bytes_data = bytes_data[b+2:]  # Remove the processed bytes
            frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            if frame is not None:
                cv2.imshow('Video Stream', frame)
                if cv2.waitKey(1) & 0xFF == ord('q'):  # Exit loop if 'q' is pressed
                    break
    cv2.destroyAllWindows()
else:
    print("Failed to connect to the stream.")
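The SOI/EOI scan in the loop above can also be factored into a small stdlib-only generator (iter_jpeg_frames is my own name, not from any library), which makes the boundary handling easier to test. Searching for the end marker only after the start marker also avoids matching a stray end marker left over from a previous frame:

```python
def iter_jpeg_frames(chunks):
    """Yield complete JPEG frames from an iterable of byte chunks.

    Accumulates bytes, scans for the JPEG start marker (FFD8) and the
    end marker (FFD9) after it, yields each complete image, and drops
    consumed bytes from the buffer.
    """
    buf = b''
    for chunk in chunks:
        buf += chunk
        while True:
            a = buf.find(b'\xff\xd8')                            # start-of-image
            b = buf.find(b'\xff\xd9', a + 2) if a != -1 else -1  # end-of-image
            if a == -1 or b == -1:
                break
            yield buf[a:b + 2]
            buf = buf[b + 2:]

# Two frames split awkwardly across chunk boundaries:
frames = list(iter_jpeg_frames([b'junk\xff\xd8AB', b'C\xff\xd9\xff\xd8', b'D\xff\xd9']))
print(len(frames))  # 2
```

In the viewer, each yielded frame would go through cv2.imdecode exactly as in the code above.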

The above code is for viewing Motion JPEG. To view an H.264 stream, use the code below:

import cv2
import os

# URL of the H.264 video stream
CAM_HOST = os.environ.get('CAM_HOST') or ''
CAM_PORT = os.environ.get('CAM_PORT', 80)
CAM_USER = os.getenv('CAM_USER', 'admin')
CAM_PASS = os.getenv('CAM_PASS', '')

# URL of the video stream
stream_url = f'http://{CAM_USER}:{CAM_PASS}@{CAM_HOST}:{CAM_PORT}/video.cgi'

# Create a VideoCapture object
cap = cv2.VideoCapture(stream_url)

# Check if camera opened successfully
if not cap.isOpened():
    print("Error: Could not open video stream.")
else:
    # Read until video is completed
    while cap.isOpened():
        # Capture frame-by-frame
        ret, frame = cap.read()
        if ret:
            # Display the resulting frame
            cv2.imshow('Video Stream', frame)

            # Press Q on keyboard to exit
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        else:
            break

# When everything done, release the video capture object
cap.release()

# Closes all the frames
cv2.destroyAllWindows()

Wednesday, June 19, 2024

Revisit 坦克大战编程

In 2021 I posted two blogs regarding building Tanks natively with Visual Studio Build Tools:

When I tried to redo the build from scratch, I noticed something was missing in my posts; for example, there is no 'Build' folder by default, and there is no need for Cygwin. Also, I'm planning to add network support so players can play the game over the internet. So I decided to write another blog to explore this. Here is a complete step-by-step for creating the build:

  1. Download Visual Studio Build Tools. Go to https://visualstudio.microsoft.com/downloads/?q=build+tools, scroll down and look for something like Build Tools for Visual Studio 2022. When installing, only "Desktop Development with C++" needs to be selected; among the optional modules, the build environment pieces such as MSVC v143 and C++ CMake tools for Windows are needed. All the others can be deselected to save some space.
  2. Run "git clone https://github.com/quyq/Tanks.git" in working directory to pull the source code
  3. Install the SDL2 header files and libs. The SDL2 binaries can be downloaded from:
  4. Start a Visual Studio Build Tools environment, and do:
    • create 'build' folder (such as 'mkdir build') under project root
    • change working directory to 'build' (i.e. run 'cd build'), then run:
      • cmake -G "NMake Makefiles" ..
      • Note: the two dots are a must; they set the source folder one level up, and this will create the Makefile, an out folder and several other files/folders under 'build'
    • Run 'NMake', which should create 'tank.exe' under the 'build/out' folder. Resource files are copied there too.

For using "NMake Makefiles" generator, update settings.json as:

{
    "terminal.integrated.defaultProfile.windows": "Command Prompt",
    "cmake.generator": "NMake Makefiles",
}

The first line changes the default terminal from "PowerShell" to "Command Prompt". The next line selects the generator; by default it would use the Visual Studio 16 2019 generator.

Building under WSL is much easier: just install make/g++ and the SDL2 development package, then run make. If using Win11+WSL2, no other extra work is needed. If running WSL2 on Win10, you may need to update to the latest WSL, which has systemd support for GUI. And for audio, you may follow the instructions at https://x410.dev/cookbook/wsl/enabling-sound-in-wsl-ubuntu-let-it-sing/, which are clear and step by step, and do not open unnecessary permissions when utilizing PulseAudio.

 

Saturday, May 11, 2024

Developing Android App with chart supported by MPAndroidChart

MPAndroidChart (https://github.com/PhilJay/MPAndroidChart) is a powerful Android chart/graph view library, supporting line, bar, pie, radar, bubble and candlestick charts as well as scaling, panning and animations. As an open-source project, the source code is freely available on GitHub. There is a general introduction, javadocs, and example code, but I cannot find a user guide on how to use this lib from an Android project. The project was actively developed several years ago, so it uses Java and Gradle Groovy. Unfortunately, the latest Android Studio only supports Gradle Kotlin and no longer provides an option to select Java as the language. I had a lot of problems running the example code, and also tried following other online examples/tutorials using MPAndroidChart, with no luck either. That made me decide to write this post to note down what I have.

First, as mentioned here and several other posts, you would need to add 'jitpack.io' to your Project level Gradle file like this:

repositories {
    maven { url 'https://jitpack.io' }
}

However, with a project created by a recent Android Studio, if you add the above to your root build.gradle file, you will get an error like "Build was configured to prefer settings repositories over project repositories but repository 'maven' was added by build file 'build.gradle'". The correct way is to add it to the settings.gradle file like this:

dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { url 'https://jitpack.io' }
    }
}

Second, if the project is using Gradle Kotlin, then would need to update settings.gradle.kts like this:

dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { setUrl("https://jitpack.io") }
    }
}

Note that the syntax differs between the two types of Gradle files.

Wednesday, February 7, 2024

RISC-V on ZC706 Evaluation Board - Part VI: Building fesvr-zynq with Petalinux

As of RISC-V on ZC706 Evaluation Board - Part V: Running Petalinux, I'm back to square one and need to figure out how to build fesvr-zynq with Petalinux. First, set all the environment I can:

source <path-to-installed-PetaLinux>/settings.sh
source <path-to-installed-PetaLinux>/components/yocto/buildtools/environment-setup-x86_64-petalinux-linux
source <path-to-installed-Xilinx>/Vitis/2022.1/settings64.sh
export PATH=$PATH:<path-to-installed-Xilinx>/Vitis/2022.1/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/bin/arm-xilinx-linux-gnueabi
cd <path-to-fpga-zynq>/zc706 && make fesvr-zynq

Now, it complains:
arm-xilinx-linux-gnueabi-g++ -O2 -std=c++11 -Wall -L fpga-zynq/common/build -lfesvr -Wl,-rpath,/usr/local/lib -I fpga-zynq/common/csrc -I fpga-zynq/testchipip/csrc -I fpga-zynq/rocket-chip/riscv-tools/riscv-fesvr/ -Wl,-rpath,/usr/local/lib  -o fpga-zynq/common/build/fesvr-zynq /mnt/ext4/fpga-zynq/common/csrc/fesvr_zynq.cc fpga-zynq/common/csrc/zynq_driver.cc fpga-zynq/testchipip/csrc/blkdev.cc

Vitis/2022.1/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/lib/arm-xilinx-linux-gnueabi/gcc/arm-xilinx-linux-gnueabi/11.2.0/include/stdint.h:9:16: fatal error: stdint.h: No such file or directory

   9 | # include_next <stdint.h>
     |                ^~~~~~~~~~

It still sounds like some configuration is missing for the build. Exporting CFLAGS/CPPFLAGS/CXXFLAGS, or setting them on the make command line, to "-I<path-to-Xilinx>/Vitis/2022.1/gnu/aarch32/lin/gcc-arm-linux-gnueabi/x86_64-petalinux-linux/usr/include" doesn't help either.

A search shows me a link from lowRISC, Building the front-end server:

# set up the RISCV environment variables
# set up the Xilinx environment variables
cd $TOP/riscv-tools/riscv-fesvr
mkdir build_fpga
cd build_fpga
../configure --host=arm-xilinx-linux-gnueabi
make -j$(nproc)

Once compilation has completed, you should find the following files:

ls -l fesvr-zedboard
ls -l libfesvr.so

To copy your new front-end server to the FPGA image:

cd $TOP/fpga-zynq/zedboard
make ramdisk-open
sudo cp $TOP/riscv-tools/riscv-fesvr/build_fpga/fesvr-zedboard \
  ramdisk/home/root/fesvr-zynq
sudo cp $TOP/riscv-tools/riscv-fesvr/build_fpga/libfesvr.so \
  ramdisk/usr/local/lib/libfesvr.so
make ramdisk-close
sudo rm -fr ramdisk

The proxy kernel (pk) used by the FPGA is the same one used in simulation. While not normally necessary, the proxy kernel can be recompiled using the following commands:

cd $TOP/fpga-zynq/zedboard
make ramdisk-open
sudo cp $TOP/riscv-tools/riscv-pk/build/pk ramdisk/home/root/pk
make ramdisk-close
sudo rm -fr ramdisk

lowRISC also has its riscv-fesvr build instructions in fpga-zynq/README.md, slightly different from https://github.com/ucb-bar/fpga-zynq. And actually the two behave the same if I use Xilinx 2016 tools, which create an 'SDK' folder; after running 'source SDK/2016.2/settings64.sh' and 'make fesvr-zynq':

mkdir -p fpga-zynq/common/build
cd fpga-zynq/common/build && \
fpga-zynq/rocket-chip/riscv-tools/riscv-fesvr/configure \
        --host=arm-xilinx-linux-gnueabi
&& \
make libfesvr.so
checking build system type... x86_64-unknown-linux-gnu
checking host system type... arm-xilinx-linux-gnueabi
checking for arm-xilinx-linux-gnueabi-gcc... arm-xilinx-linux-gnueabi-gcc
checking whether the C compiler works... no
configure: error: in `fpga-zynq/common/build':
configure: error: C compiler cannot create executables
See `config.log' for more details

the same error as when following the lowRISC instructions. Now I need to figure out the problem from the config.log file. The log indicates several warnings for the same thing:

fpga-zynq/rocket-chip/riscv-tools/riscv-fesvr/configure: line 2365: ~/SDK/2016.2/gnu/arm/lin/bin/arm-xilinx-linux-gnueabi-gcc: No such file or directory

The gcc compiler does exist, but file ./arm-xilinx-linux-gnueabi-gcc shows:
./arm-xilinx-linux-gnueabi-gcc: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.16, stripped

which means I need to enable 32-bit support in WSL, as I mentioned in Run Linux on Windows - WSL, by doing:

sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt install gcc:i386 gcc-multilib g++-multilib libc6:i386

With that, I'm finally able to run make fesvr-zynq; fesvr-zynq and libfesvr.so are generated under the fpga-zynq/common/build folder. When copying fesvr-zynq, I also need to copy common/build/libfesvr.so to /usr/local/lib on the board. As mentioned in the lowRISC instructions and fpga-zynq above, it is possible to recreate the ramdisk; however, I get this when trying the commands under WSL: cpio: dev/console: Cannot mknod: Operation not supported. Not sure whether this is a WSL limitation or something I have missed; giving up on this for now. So I tried to copy the new executable over. For that, the board may need to get an IP from a DHCP server if it is connected to a network. Modifying /etc/network/interfaces with the line 'iface eth0 inet dhcp', then doing 'ifdown eth0' and 'ifup eth0', works temporarily, as changes to the interfaces file won't survive a reboot. After successfully getting an IP from the DHCP server, ssh may still not work, with the error: Unable to negotiate with a.b.c.d port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1. You can try tftp instead:

cd ~
tftp -g -r fesvr-zynq tftp_server
cd  /usr/local/lib
tftp -g -r libfesvr.so tftp_server

Now, running fesvr-zynq without arguments gets the usage printout (with the original fesvr-zynq executable, I used to get an "ERROR: No cores found" error, the same error as running 'fesvr-zynq pk hello'), but I'm still not able to load the bbl or run the hello code.

PS: The README.md in fpga-zynq/rocket-chip/riscv-tools/riscv-fesvr shows:

This repository is deprecated; it has been absorbed into the Spike repository (https://github.com/riscv/riscv-isa-sim).


Sunday, February 4, 2024

Use Blink Mini camera without Amazon subscription

Blink Mini is a cheap camera. Its motion detection is a bit awkward to me, as it always detects changes outside the region I set for motion detection. And after one year, video recording stops working without a subscription.

Luckily, there are open-source solutions, written in Python and likely all based on the documentation at: https://github.com/MattTW/BlinkMonitorProtocol

1) https://pypi.org/project/blink-cameras/. I didn't try it, as development appears to have been paused since May 2019.

2) https://pypi.org/project/blinkpy, github: https://github.com/fronzbot/blinkpy. This library was built with the intention of allowing easy communication with Blink camera systems, specifically to support the Blink component in Home Assistant.

Following are my notes on using blinkpy.

The blinkpy GitHub site has a brief introduction on how to use it. The information on the pypi.org page is likely out of date, as it fails with: module 'blinkpy' has no attribute 'Blink'.

When I tried the example code from the README, I got this:

Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x00000201A7EE2310>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x00000201A7EE6040>, 57822.015)]', '[(<aiohttp.client_proto.ResponseHandler object at 0x00000201A7F22460>, 57822.593)]']
connector: <aiohttp.connector.TCPConnector object at 0x00000201A7EE2370>
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x00000201A7EE2970>
transport: <_ProactorSocketTransport fd=844 read=<_OverlappedFuture cancelled>>
Traceback (most recent call last):
  File "C:\miniconda3\lib\asyncio\sslproto.py", line 684, in _process_write_backlog
    self._transport.write(chunk)
  File "C:\miniconda3\lib\asyncio\proactor_events.py", line 359, in write
    self._loop_writing(data=bytes(data))
  File "C:\miniconda3\lib\asyncio\proactor_events.py", line 395, in _loop_writing
    self._write_fut = self._loop._proactor.send(self._sock, data)
AttributeError: 'NoneType' object has no attribute 'send'

It looks as if the connection isn't successfully established, but actually it is: I added more code to read the camera names and attributes, and all of that information is read back correctly before the error above shows up. Most likely the error is raised at close or exit time. As a comment on aiohttp issue 5941 explains, loop._proactor being None means loop.close() was called before session.close(); so this is incorrect usage, not an aiohttp problem.
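The pattern behind that comment can be demonstrated without blinkpy or aiohttp at all. A minimal sketch, using a stand-in session class since aiohttp.ClientSession needs real connections: any async cleanup such as session.close() must be awaited inside the coroutine passed to asyncio.run(), because the event loop is torn down as soon as asyncio.run() returns.

```python
import asyncio

class FakeSession:
    """Stand-in for aiohttp.ClientSession; its cleanup needs a running loop."""
    def __init__(self):
        self.closed = False

    async def close(self):
        # aiohttp schedules connector teardown on the running event loop here;
        # if the loop is already gone (asyncio.run returned), that step fails
        # with errors like "'NoneType' object has no attribute 'send'".
        asyncio.get_running_loop()  # raises RuntimeError without a running loop
        self.closed = True

async def main():
    session = FakeSession()
    try:
        pass  # ... talk to the cameras here ...
    finally:
        await session.close()  # close BEFORE asyncio.run() tears down the loop
    return session

session = asyncio.run(main())
print(session.closed)  # True: closed while the loop was still alive

# Closing too late: there is no running loop any more, cleanup cannot happen.
late = FakeSession()
try:
    late.close().send(None)  # crude way to drive the coroutine with no loop
except RuntimeError:
    print("close() after the loop is gone fails")
```

The fix for the blinkpy example is therefore to close the aiohttp session inside the same async function that created it, before control returns from asyncio.run().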

Monday, January 29, 2024

RISC-v on ZC706 Evaluation Board - Part V: Running Petalinux

Following up on RISC-v on ZC706 Evaluation Board - Part IV: petalinux. As mentioned in the previous post, I didn't have tftp set up on my host PC. With 'BOOT.BIN' copied to the SD card, when booting up the board, U-Boot looks for a tftp server and tries to download the Linux image from there. I guess it is possible to copy the Linux image to the SD card and boot from there, but let's set up the tftp server instead: then updating the Linux image is more convenient, with no need to transfer the image to the SD card. It's also possible to run SCP on the ZC706 EV board to copy the image to the SD card.

It's very easy to set up tftp (Trivial File Transfer Protocol). Searching the Internet mostly recommends tftp-hpa, an enhanced version of the BSD TFTP client and server with a number of bug fixes and enhancements over the original. One can follow this to set up tftp backed by xinetd. If updating the /etc/xinetd.d/tftp file doesn't take effect and 'sudo kill -HUP pid_of_inetd' doesn't help, try 'sudo kill -9 pid_of_inetd'. Then restart xinetd with: sudo /etc/init.d/xinetd restart
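For reference, the xinetd side usually boils down to a file like the following. This is a sketch only; the server binary path and the served directory are examples and vary by distribution:

```
# /etc/xinetd.d/tftp  (tftp-hpa run via xinetd)
service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = -s /srv/tftp    # directory holding the boot images
    disable     = no
}
```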

There might be multiple tftp servers on the same subnet, so the server may need to be specified explicitly on the ZC706 board. This can be done at the U-Boot prompt with:

setenv serverip 192.168.1.117
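
To keep the setting across power cycles, and to test the transfer by hand from the U-Boot prompt, something like this should work. A sketch: 'saveenv' and 'tftpboot' are standard U-Boot commands, but the file name here is just an example and the load address reuses the scriptaddr variable from the environment dump below.

```
setenv serverip 192.168.1.117
saveenv                            # persist the environment
tftpboot ${scriptaddr} pxelinux.0  # manual test transfer from ${serverip}
```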

Somehow, it still tries to download from the other tftp server. So I tried to boot the image with QEMU instead:

 source petalinux/settings.sh
 petalinux-boot --qemu --prebuilt 2

It boots to Linux and asks for login credentials; 'root/root' doesn't work. Per https://docs.xilinx.com/r/en-US/ug1144-petalinux-tools-reference-guide/Login-Changes, the root account is disabled; the default user is petalinux and the password has to be set on first boot. For me, both 2 and 3 for the --prebuilt option boot to Linux, but with 2 it shows more information from the U-Boot stage.

Refer to QEMU User Documentation for how to use QEMU, such as:

To quit the emulation, press CTRL+A followed by X. To switch
between the serial port and the monitor, use CTRL+A followed by C.

Now, back to U-Boot with tftp: do a power cycle and press the space key within 3 seconds to pause U-Boot. printenv shows:

arch=arm
baudrate=115200
board=zynq
board_name=zynq
boot_a_script=load ${devtype} ${devnum}:${distro_bootpart} ${scriptaddr} ${prefix}${script}; source ${scriptaddr}
boot_efi_binary=load ${devtype} ${devnum}:${distro_bootpart} ${kernel_addr_r} efi/boot/bootarm.efi; if fdt addr ${fdt_addr_r}; i
boot_efi_bootmgr=if fdt addr ${fdt_addr_r}; then bootefi bootmgr ${fdt_addr_r};else bootefi bootmgr;fi
boot_extlinux=sysboot ${devtype} ${devnum}:${distro_bootpart} any ${scriptaddr} ${prefix}${boot_syslinux_conf}
boot_net_usb_start=usb start
boot_prefixes=/ /boot/
boot_script_dhcp=boot.scr.uimg
boot_scripts=boot.scr.uimg boot.scr
boot_syslinux_conf=extlinux/extlinux.conf
boot_targets=mmc0 jtag mmc0 mmc1 qspi nand nor usb0 usb1 pxe dhcp
bootcmd=run distro_bootcmd
bootcmd_dhcp=devtype=dhcp; run boot_net_usb_start; if dhcp ${scriptaddr} ${boot_script_dhcp}; then source ${scriptaddr}; fi;set;
bootcmd_jtag=echo JTAG: Trying to boot script at ${scriptaddr} && source ${scriptaddr}; echo JTAG: SCRIPT FAILED: continuing...;
bootcmd_mmc0=devnum=0; run mmc_boot
bootcmd_mmc1=devnum=1; run mmc_boot
bootcmd_nand=nand info && nand read ${scriptaddr} ${script_offset_f} ${script_size_f} && echo NAND: Trying to boot script at ${;
bootcmd_nor=cp.b ${script_offset_nor} ${scriptaddr} ${script_size_f} && echo NOR: Trying to boot script at ${scriptaddr} && sou;
bootcmd_pxe=run boot_net_usb_start; dhcp; if pxe get; then pxe boot; fi
bootcmd_qspi=sf probe 0 0 0 && sf read ${scriptaddr} ${script_offset_f} ${script_size_f} && echo QSPI: Trying to boot script at;
bootcmd_usb0=devnum=0; run usb_boot
bootcmd_usb1=devnum=1; run usb_boot
bootcmd_usb_dfu0=setenv dfu_alt_info boot.scr ram $scriptaddr $script_size_f && dfu 0 ram 0 60 && echo DFU0: Trying to boot scr;
bootcmd_usb_dfu1=setenv dfu_alt_info boot.scr ram $scriptaddr $script_size_f && dfu 1 ram 1 60 && echo DFU1: Trying to boot scr;
bootcmd_usb_thor0=setenv dfu_alt_info boot.scr ram $scriptaddr $script_size_f && thordown 0 ram 0 && echo THOR0: Trying to boot;
bootcmd_usb_thor1=setenv dfu_alt_info boot.scr ram $scriptaddr $script_size_f && thordown 1 ram 1 && echo THOR1: Trying to boot;
bootdelay=2
bootfile=pxelinux.0
bootfstype=fat
bootm_low=0
bootm_size=30000000
cpu=armv7
dfu_alt_info=mmc 0:1=boot.bin fat 0 1;u-boot.img fat 0 1
distro_bootcmd=for target in ${boot_targets}; do run bootcmd_${target}; done
efi_dtb_prefixes=/ /dtb/ /dtb/current/
ethact=ethernet@e000b000
ethaddr=??:??:??:??:??:??
fdt_addr_r=0x1f00000
fdtcontroladdr=3eadf220
ipaddr=x.x.x.x
kernel_addr_r=0x2000000
load_efi_dtb=load ${devtype} ${devnum}:${distro_bootpart} ${fdt_addr_r} ${prefix}${efi_fdtfile}
loadaddr=0x0
mmc_boot=if mmc dev ${devnum}; then devtype=mmc; run scan_dev_for_boot_part; fi
modeboot=sdboot
pxefile_addr_r=0x2000000
ramdisk_addr_r=0x3100000
scan_dev_for_boot=echo Scanning ${devtype} ${devnum}:${distro_bootpart}...; for prefix in ${boot_prefixes}; do run scan_dev_for;
scan_dev_for_boot_part=part list ${devtype} ${devnum} -bootable devplist; env exists devplist || setenv devplist 1; for distro_t
scan_dev_for_efi=setenv efi_fdtfile ${fdtfile}; if test -z "${fdtfile}" -a -n "${soc}"; then setenv efi_fdtfile ${soc}-${board}e
scan_dev_for_extlinux=if test -e ${devtype} ${devnum}:${distro_bootpart} ${prefix}${boot_syslinux_conf}; then echo Found ${prefi
scan_dev_for_scripts=for script in ${boot_scripts}; do if test -e ${devtype} ${devnum}:${distro_bootpart} ${prefix}${script}; te
script_offset_f=9c0000
script_offset_nor=0xE2FC0000
script_size_f=0x40000
scriptaddr=3000000
serverip=y.y.y.y
soc=zynq
stderr=serial@e0001000
stdin=serial@e0001000
stdout=serial@e0001000
ubifs_boot=env exists bootubipart || env set bootubipart UBI; env exists bootubivol || env set bootubivol boot; if ubi part ${bi
usb_boot=usb start; if usb dev ${devnum}; then devtype=usb; run scan_dev_for_boot_part; fi
vendor=xilinx

ethaddr, ipaddr and serverip are all set to valid values; somehow U-Boot keeps trying a different tftp server. With U-Boot paused, manually issuing 'run bootcmd' makes it load from the desired tftp server. There may be a lot of network errors, but after multiple retries it should load pxelinux.0, rootfs.cpio.gz.u-boot, zImage and system.dtb, then start the kernel successfully. uname -a =>

Linux xilinx-zc706-2022_2 5.15.36-xilinx-v2022.2 #1 SMP PREEMPT Mon Oct 3 07:50:07 UTC 2022 armv7l GNU/Linux

So it is still running on the ARM core. I still need to figure out how to fix the fesvr-zynq issue. Probably I need to rebuild fesvr-zynq with the petalinux SDK? Need to explore more.