OpenWrt Netgear WNR2200

OpenWrt Wiki page

https://oldwiki.archive.openwrt.org/toh/netgear/wnr2200

Get the package manager working

1) Disable IPv6 before running updates; otherwise wget returns error 4 and opkg update fails.

Add option ipv6 0 to the network interface section in /etc/config/network to disable IPv6. Opkg works after a restart.
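The same change can be made from the command line with uci; a minimal sketch, assuming the interface is named lan (adjust for your setup):

```shell
# Disable IPv6 on the 'lan' interface ('lan' is an assumed interface name),
# apply the change, restart networking and retry the package update.
uci set network.lan.ipv6='0'
uci commit network
/etc/init.d/network restart
opkg update
```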

Alternatively, as per this forum thread https://forum.archive.openwrt.org/viewtopic.php?id=60799:

cd /etc/config
vi firewall
# Uncomment this line to disable ipv6 rules
        option disable_ipv6     1 
cd /etc/
vi sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1 

2) Fix HTTPS

The OpenWrt servers now redirect to HTTPS, so the default wget, which does not support HTTPS, fails; use an alternative HTTP mirror instead. If you had the space, you could install an HTTPS-capable wget. With 8 MB of flash, you don't, at least not if you want to install anything else.

As outlined here https://forum.openwrt.org/t/opkg-update-error-wget-returned-4/637/6

Edit /etc/opkg/distfeeds.conf and change the feed URLs from https:// to http://, then run opkg update again.
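A common way to do this is a one-line sed over the feed list; a sketch, assuming the stock /etc/opkg/distfeeds.conf:

```shell
# Rewrite every https:// feed URL to http:// so the stock (non-TLS) wget works.
sed -i 's|https://|http://|g' /etc/opkg/distfeeds.conf
opkg update
```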

3) SSH in and run opkg update.

External drive / storage with OpenWRT

If you want to use an external USB drive for extroot, you're out of luck: it did not work for me. The pivot to the overlay happens before USB SCSI support loads, the patches I tried did not work, and Gargoyle's external packages only kind of work.

https://openwrt.org/docs/guide-user/additional-software/extroot_configuration#new.external.overlay.variant.pivot.overlay

Unbricking OpenWrt on the Netgear WNR2200

Read both

https://openwrt.org/docs/guide-user/installation/generic.flashing.tftp

https://openwrt.org/toh/netgear/wnr2200

If you run out of free space or otherwise brick your router:

Turn the router off, hold in the reset button, turn it on, and keep holding reset until the LED switches color. Then TFTP your original image.

The TFTP window is rather short, so what worked for me was having the TFTP client up and ready on my GNU/Linux machine, with the router connected via Ethernet cable, laptop to port 1 (a LAN port, not the differently colored WAN port), before starting the reset cycle.

Otherwise I was getting timeouts.
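For reference, a sketch of the push using the tftp-hpa client; the bootloader address and image filename are assumptions (check the two wiki pages above for your unit):

```shell
# Assumes the laptop already has a static IP in the router's subnet
# (e.g. 192.168.1.2/24) and the bootloader listens on 192.168.1.1.
# 'firmware.img' is a placeholder for your original image file.
tftp -v -m binary 192.168.1.1 -c put firmware.img
```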


TTY / TTL terminal connection

The TTL serial pins are already on the board.

Make sure your USB adapter is set to 3.3 V. There is no need to connect +V; just ground, TX and RX.

To connect in GNU/Linux:

screen /dev/ttyUSB0 115200
# or, with the serial settings spelled out (8 data bits, software flow control off):
screen /dev/ttyUSB0 115200,cs8,-ixon,-ixoff,-istrip

Audio feedback with a buzzer

Plus to TX, minus (0) to ground.

A 12 V active piezo buzzer ripped out of a salvaged alarm panel module provides a nice modem-like buzz when the router boots and prints to the console; a 5 V buzzer was giving a nasty hum. An easy way to hear your router boot and enjoy some dial-up nostalgia.


Get started with AI in less than 10 seconds

Audience: absolute beginners in AI and development

Getting started with AI is not hard. Read this and edit your first autonomous-car AI script in less than 10 seconds.


The good news: you can get started immediately. You don't have to know how to code or have a fancy computer, and you can study enough free courses, papers and materials to enter the top 20% of people in AI today. Frankly, the entrance bar is ridiculously low; people prefer celebrity gossip. Less competition for you.

The bad: Getting to the true top is another story.

  • What you need: a computer running a current Chrome browser (for the in-browser script)
  • Ability to read and follow instructions
  • Desire to learn
  • That’s it!

In your Chrome browser, go to the MIT DeepTraffic self-driving-car page, which simulates a car in traffic, and edit the script to change the settings.

The fastest setting to change is adding more (red by default) cars. Just change the line

otherAgents = 0; // max of 10

to any other number from 1 to 10 and click the “Apply code and reset net” button.

Congratulations, you've just changed your first AI script operating a self-driving car! That reading skill? If you have the courage to read the paper available on the same page and follow instructions, you can change the script to easily pass the 65 MPH grading requirement. Literally, play with 8 variables; all of them are outlined in the paper.

Now take that free MIT self-driving course to get started. Read, watch, learn. Did I mention free?

If you want to go further

Get GNU/Linux. Get started with Ubuntu.

Start dual-booting GNU/Linux today if you're not ready to switch completely. Remember that skill, reading? You are literally several minutes away from booting your PC (or even Mac) into the same environment all the big players are using for their AI research and self-driving cars. Just get an Ubuntu live image and see how easy it is.

This tech is not going anywhere, at least until AI takes over the world. And that Android phone is (somewhat) running GNU/Linux as well. Apple OS X? That's BSD-derived, a *nix flavor; I can launch a terminal window on OS X and use a lot of the same commands I do in GNU/Linux. Even Microsoft's AI efforts are largely based on FOSS developed outside of Microsoft.

After installing Ubuntu GNU/Linux, start learning Python. Try Tensorflow. Let me know how it goes.
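As a sketch of that first session, assuming a fresh Ubuntu install with apt and network access:

```shell
# Install Python 3 and pip, then TensorFlow, and check that it imports.
sudo apt update
sudo apt install -y python3 python3-pip
pip3 install --user tensorflow
python3 -c "import tensorflow as tf; print(tf.__version__)"
```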


Tesla HW3, AP3, hardware 3, V9+ Autopilot, NN chip – what we know

What we know as of today about HW3 (AP3), Tesla's NN chip, and V9 and above

2018 – Partner no more – Mobileye and Nvidia dumped as Autopilot suppliers (the only sure information)

Tesla dumped Mobileye (EyeQ3). Mobileye is working with BMW and Intel and has been flirting with regulators, but apparently was not moving fast enough, with its current estimate for L4 set at 2021.

Looks like Tesla dumped Nvidia as well. The Nvidia PX2 was rumored to cost up to $15,000 per unit, with bulk prices eventually targeted in the $2,000 range. Beyond the cost of retrofitting all the Tesla vehicles sold on the promise of potential full autonomy, one benefit of Tesla-developed/specced chips would be better power consumption, always a consideration in electric vehicles.

2019, early to mid – planned delivery of Tesla HW3 with Tesla's own NN chip

Musk's 2018 tweets announced that the chip will improve Tesla Autopilot performance by between 500% and 2000%.

 

Version 9 / HW3 – everything below is gossip only

  • Camera agnostic. At the limit of what the pre-HW3 hardware could provide. For more info, see this forum post from 2018 by Jimmy_d
  • Depth through stereo+ – using all cameras allows 360° vision, better depth perception and thus better object recognition
  • One network handles all 8 cameras
  • The same weight file is used for all cameras (this has pretty interesting implications; previously, V8 main/narrow seems to have had separate weights for each camera)
  • Processed resolution of the 3 front cameras and the back camera: 1280×960 (full camera resolution)
  • Processed resolution of the pillar and repeater cameras: 640×480 (1/2×1/2 of the camera’s true resolution)
  • All cameras: 3 color channels, 2 frames (2 frames also has very interesting implications) (V8 was 640×416, 2 color channels, 1 frame, main and narrow only)

“This V9 network is a monster, and that’s not the half of it. When you increase the number of parameters (weights) in an NN by a factor of 5 you don’t just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it’s more akin to a number with 5 times as many digits. So if V8’s expressive capacity was 10, V9’s capacity is more like 100,000. It’s a mind boggling expansion of raw capacity. And likewise the amount of training data doesn’t go up by a mere 5x. It probably takes at least thousands and perhaps millions of times more data to fully utilize a network that has 5x as many parameters.

This network is far larger than any vision NN I’ve seen publicly disclosed and I’m just reeling at the thought of how much data it must take to train it. I sat on this estimate for a long time because I thought that I must have made a mistake. But going over it again and again I find that it’s not my calculations that were off, it’s my expectations that were off.”

“Scaling computational power, training data, and industrial resources plays to Tesla’s strengths and involves less uncertainty than potentially more powerful but less mature techniques. At the same time Tesla is doubling down on their ‘vision first / all neural networks’ approach and, as far as I can tell, it seems to be going well.”

Tesla plans for HW3 – release mid-2019

  • retrofit in vehicles with existing 2.0 and 2.5 hardware for those who buy the Full Self-Driving Capability package when it's released.

Who / what is making the chip and the pilot hardware?

Tesla likes making everything in-house. Even the seats. So what is happening with the NN chip and the pilot hardware?

We know Tesla poached top people from AMD. We know Samsung is supplying something. But what is it, and who is going to make it?

Q3 earnings call, October 2018, Pete Bannon (former chip designer at Apple, now Tesla's engineering head), transcript

“Hi, this is Pete Bannon. The Hardware 3 design is continuing to move along. Over the last quarter, we’ve completed qualification of the silicon, qualification of the board. We started the manufacturing line, qualification of the manufacturing line. We’ve been validating the provisioning flows in the factory. We built test versions of Model S, X and 3 in the factory to validate all the fit and finish of the parts and all the provisioning flows.​”

Q3 statement from Andrej Karpathy

“we are currently at a place where we trained large neural networks that work very well, but we are not able to deploy them to the fleet due to computational constraints. So, all of this will change with the next iteration of the hardware. And it’s a massive step improvement in the compute capability. And the team is incredibly excited to get these networks out there.”

What we know today – rumors

From this Reddit thread:

  • Samsung Exynos 7xxx SoC, based on the presence of ARM A72 cores (not a super-new SoC; the Exynos 7xxx is of roughly Oct 2015 vintage). HW3 CPU cores are clocked at 1.6 GHz, with a Mali GPU at 250 MHz and memory at 533 MHz.
  • The HW3 architecture is similar to HW2.5 in that there are two separate compute nodes (called “sides”): the “A” side, which does all the work, and the “B” side, which currently does nothing.
  • Also, it appears some devices are attached to this SoC. Obviously there is some eMMC storage, but more importantly there is a Tesla PCIe device named “TRIP” that works as the NN accelerator. The name might be an acronym for “Tensor <something> Inference Processor”. In fact, there are at least two such “TRIP” devices, possibly two per “side”.

Uber Arizona crash / accident – investigation report

The key points of interest from the preliminary report; emphasis and (comments) mine. Report source

The vehicle was factory equipped with several advanced driver assistance functions by Volvo Cars, the original manufacturer. The systems included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information. All these Volvo functions are disabled when the test vehicle is operated in computer control but are operational when the vehicle is operated in manual control.

….

According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface. The operator further stated that although her personal and business phones were in the vehicle, neither was in use until after the crash, when she called 911.

(no information available at this time on whether the NTSB investigators subpoenaed the operator/driver's phone records)

(The Arizona Tempe Police Department's reported 318-page accident report states that Vasquez's Hulu account had been streaming the TV show The Voice for 42 minutes before the crash, with access continuing until the accident occurred.)

N.B. Arizona is one of the few US states that does not have laws against the use of cellphones while operating vehicles on public roads and highways.

Autonomous cars – self-driving cars glossary, frequently used abbreviations, terminology

Under update

SAE J3016 – Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.


U.S. DoT's official reference for defining the levels of vehicle autonomy (0–5).

https://www.sae.org/standards/content/j3016_201401/

ADAS – (not so) ADVANCED DRIVER ASSISTANCE SYSTEM

ACC – ADAPTIVE CRUISE CONTROL

A cruise control system that, ideally, automatically adapts speed to keep a safe distance from obstacles in front.

AEB – AUTOMATIC EMERGENCY BRAKING, AUTONOMOUS EMERGENCY BRAKING

Automatic braking to prevent collision with an obstacle in front. Good luck against hijackers though.

CIB – CRASH IMMINENT BRAKING, COLLISION IMMINENT BRAKING

APS – AUTOMATIC PARKING SYSTEM

IPAS – INTELLIGENT PARKING ASSIST SYSTEM

BSD – BLIND SPOT DETECTION
BSM – BLIND SPOT MONITORING
BSW – BLIND SPOT WARNING

BOP – BACK-OVER PROTECTION, BACK-OVER PREVENTION

HUD – HEAD-UP DISPLAY (projected on the windshield glass)

UPA – ULTRASONIC PARK ASSIST

DMS – Driver Monitoring System

FlexRay

https://en.wikipedia.org/wiki/FlexRay

OTA – Over The Air Updates. Software update without going to dealership.

SAD – SEMI-AUTONOMOUS DRIVING

SLAM – Simultaneous Localization and Mapping

LIDAR – light detection and ranging (“laser radar”) https://en.wikipedia.org/wiki/Lidar

Positioning / orientation

RADAR – radio detection and ranging https://en.wikipedia.org/wiki/Radar

FMEA – Failure Mode Effects Analysis

IMU – Inertial Measurement Unit (orientation without GPS) https://en.wikipedia.org/wiki/Inertial_measurement_unit

FLIR – Forward-Looking Infrared (infrared camera) https://en.wikipedia.org/wiki/Forward-looking_infrared

GPS – geopositioning with satellites

PPP – Precise Point Positioning (for GPS) https://en.wikipedia.org/wiki/Precise_Point_Positioning

RTK – Real-Time Kinematic positioning https://en.wikipedia.org/wiki/Real-time_kinematic