A friend of mine made fun of me for rewriting my website so many times; the latest iteration is in Next.js. It moves my old Hugo blog to a new React and Tailwind CSS site, written in TypeScript.
After crashing my RC plane, I decided to experiment with the flight controller using the cheapest possible RC planes I could build.
I bought foam board from Dollar Tree. Using the free designs from Flite Test (I'm building their Simple Cub), it quickly became apparent that printing out the plans, cutting, lining them up, and gluing takes hours and hours per design.
A big thanks to those who helped on the ArduPilot forum.
Last week, I took a few extra days off from work to give myself a nice long break over Thanksgiving week.
I spent a lot of time finishing an RC plane we had some replacement parts for. This one was going to be autonomous. I spent a few days soldering the flight controller and electronics, gluing the plane together, and measuring and trimming everything. It was perfectly balanced, perfectly weighted. We finally took it out for its maiden flight today, Sunday, November 27. Sadly, this story ends in pieces.
At Meta, we have a few internal workflow orchestration engines. The idea is to let long-running jobs, or work that can be split into tasks with safe "checkpoints" between them, be defined in a system that executes them in a durable, reliable, and scalable way.
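None of these engines are public, and the real systems do far more, but the core "safe checkpoint" idea can be sketched in a few lines of Python (a toy illustration, not Meta's actual implementation):

```python
import json
import os
import tempfile

def run_workflow(steps, state_path):
    """Run named steps in order, checkpointing completed names to disk.

    On a restart, steps recorded in the state file are skipped, so each
    step only ever needs to run to completion once.
    """
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = set(json.load(f))
    for name, fn in steps:
        if name in done:
            continue  # checkpoint says this step already completed
        fn()
        done.add(name)
        with open(state_path, "w") as f:  # persist the checkpoint
            json.dump(sorted(done), f)

# Toy usage: the second invocation is a no-op, because both steps
# were checkpointed by the first.
log = []
state = os.path.join(tempfile.mkdtemp(), "state.json")
steps = [("extract", lambda: log.append("extract")),
         ("load", lambda: log.append("load"))]
run_workflow(steps, state)
run_workflow(steps, state)
```

The interesting engineering is everything this sketch skips: persisting state somewhere durable, retries, and running tasks across many machines.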
Last time I did hydroponic growing and germinated seeds in rockwool, I found it annoying that the roots would grow out the bottom of the rockwool, then get squished by the ground, making it harder to have dangling roots to easily transfer to an NFT system.
In Network Engineering at
Ordering is a little off here, but I wanted to write up a bug I'm seeing in the ESPChess project (post to come) and how I intend to fix it.
As my hydroponic experiment has shown success, I wanted to scale it up. My plan is to build an NFT hydroponic system for the yard, and a Dutch bucket system for tomatoes.
This year I've decided to start playing with hydroponics as a hobby. I've been interested in starting with the Kratky method of gardening. The idea is to grow lettuce in mason jars and see how growing without soil goes. I've previously enjoyed growing my own alfalfa microgreens, so this is the next obvious step.
My partner and I recently started renting an old place in Palo Alto. With COVID, we've been working from home for almost two years now, and sharing a desk at times made things feel a little cramped (especially with the occasional electronics project).
When deciding what to attempt to grow where in the unit we're renting, I decided to perform a light study of the backyard. To do this, I first used the 3D Scanner App to perform a LiDAR scan of the house; the areas of interest are the fences and the walls of the house. Once the scan was done, I exported it to OBJ format and loaded it into my favourite 3D tool, Blender.
It's so easy to build React apps these days. Over two nights, I knocked up a quick website that lets you upload IGC flight files and an XCTask file to locally compute whether you managed to hit the targets.
After working at Facebook for five years, we get to have a Recharge (think of it as long service leave): four weeks of break, to which I added one week of PTO.
I've not been writing or blogging for some time; over half a decade. It has been about 12 years since I started this blog, and a lot has changed since then. I've not found the time to document my tinkerings and projects, as work has taken most of my time. I'm going to leave work-related things on LinkedIn and personal posts on Facebook, but I would like to continue using this site much as my partner uses her diaries: as a way of looking back at what I've been playing with.
I made this video a while ago and some people on YouTube seem to like it, so here is my very basic MPLS tutorial. Towards the end of the video there's a live demonstration showing the concepts with Wireshark.
One of the biggest problems in the environment I work in is that the deployment of almost all of our 300+ devices has been hand-crafted. Usually this isn't such a big problem, but combine that with a design decision to route right to the access layer of our campus network with a multi-VRF design, and you can start to see how mistakes, or changes in design along the way, have led to inconsistencies. Not only that, but when it comes time to change something, that means going through and altering nearly 300 devices: a massive pain that doesn't scale.
At UOW we had a challenge. We wanted to allow proxy-free internet, but wanted to keep an eye on how much data was being consumed by what sort of users. For this we built Project Herbert http://uowits.github.io/herbert-gui/docs.html.
It uses NetFlow from inside our network and some syslog monitoring scripts to match our private RFC1918 address space to the users who hold it at the time, then processes the flows in near-realtime so we can adjust throttling and firewall policy reactively.
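The attribution step boils down to a time-ranged join between syslog-derived sessions and flow records. A simplified sketch in Python (the record formats here are made up for illustration, not Herbert's actual code):

```python
# (ip, user, start_ts, end_ts) sessions gleaned from auth syslog messages
sessions = [
    ("10.0.0.5", "alice", 100, 200),
    ("10.0.0.5", "bob", 250, 400),
]

def user_for(ip, ts):
    """Return which user held a private IP at the time a flow was seen."""
    for session_ip, user, start, end in sessions:
        if session_ip == ip and start <= ts <= end:
            return user
    return None  # flow seen outside any known session

# (src_ip, timestamp, bytes) records as exported by the flow collector
flows = [("10.0.0.5", 150, 5000), ("10.0.0.5", 300, 9000)]

# Accumulate per-user usage so throttling policy can react to it
usage = {}
for ip, ts, nbytes in flows:
    user = user_for(ip, ts)
    if user:
        usage[user] = usage.get(user, 0) + nbytes
```

At scale you would index sessions per IP rather than scan a list, but the per-flow lookup is the same idea.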
The idea was to build this as a distributed system and allow it to scale out to deal with more load.
It's often handy when dealing with infringement notices and the like to have NAT translations logged. Sure, a better way would be to record NetFlow from these devices (and include the translations), but for a quick syslog solution, you can always enable it by running the following...
We have an interesting problem at my workplace: we have an MPLS VPN design for separation of security zones (e.g., staff from students), and we don't have MPLS support on our edge. With an L3-to-the-edge design, though, this means that while every edge switch has its own address space (per VRF), it also has a /30 uplink (again, per VRF) back to the PE device.
At the organisation I'm currently working for, we recently experienced what appears to be a common issue: VLANs trunked down to ESXi nodes were inconsistent.
In our DC, we're still running the old-school Cisco Catalyst switches. If we were running a fabric, or Nexus switches, we could put port profiles into action, or if we were lucky enough to have some equipment running Junos ❤️ we could be using apply-groups for this.
I've been happily running my Apple AirPort Extreme as my home router for the past few years (since my Debian router died, and I've been too lazy to replace it). One of the cool features is the ability to create a guest network (SSID) that can access the internet without being able to access your trusted network. One feature I wanted was the ability to throttle the speed at which guests access the internet. While I couldn't do this with the AirPort Extreme alone, add the Juniper SRX100 that the awesome Cooper Lees gave me into the mix, and problem solved.
I've never found a really simple video on what exactly anycast is, with basic examples, when exploring the concepts. I decided to lab it up and figured this might help some of you starting out. Any comments, feel free to let me know!
One recent pet gripe of mine has been having to add a new VLAN into our datacenter for our vSphere platform. Not that I trust my DC switches with Puppet just yet; this is a proof-of-concept post about how we could use Puppet to centrally manage this configuration and push it out across our DC.
This article is about using the Asterisk PBX and exploiting Google's voice recognition API (built for voice search in Chrome) to build an address book that technologically inept people (my grandmother) can use to place cheap telephone calls over VoIP.
This tool is built for my grandmother, a lady whose macular degeneration has left her legally blind. She doesn't want to invest a great deal of money in this solution or have much of a learning curve; it took long enough to get her using the two-button audiobook solution on the iPod. The basic idea is to purchase a direct inward dialing (DID) number and program it into her speed dial. Calling it connects to an Asterisk virtual machine that launches voice recognition to listen for who she wants to dial, looks up the number in her address book, then connects the call through, all at the rate of about $0.11 for a national call (actually saving her money!).
So lately I've been thinking about my backup strategy on my Mac. From previous posts you might know I've built my OpenIndiana ZFS fileserver. Well, I just created a volume and decided to put 300GB to good use as a Time Machine target for my Mac. There is a brilliant guide on how to do it here, and I suggest you all take a look (thanks for the awesome guide, Marco).
Just finishing off a few things at work this week. We've got a few sites around the place where we have HA internet powered by two Juniper SRX100s. The two SRX100s operate in a chassis cluster and peer with our ISP using BGP across both active/passive devices.
We all know the problem: some sites are restricted to certain countries based on the IP address you're using to view them. When trying to access them from overseas, some solutions are HTTP proxies, SOCKS proxies, and the like. The problem I have with all of these is that they're annoying to set up whenever you want to view the site, and I don't want to have to do that for all my devices (iPad, computer, etc.).
This solution will tunnel only the sites you want over a VPN connection to be NAT'd out the other end, all on the gateway.
It’s all good having a redundant network design, but putting web servers and the like on our hypervisors doesn’t make them redundant. In the event where there’s a failure on one of our servers, all virtual machines on that server will die. Looking at our previous network design, we can see a failure on a web server or database server would cause service outage.
In one of my last little tasks at work, I was asked to eliminate single points of failure in the software and hardware stack without spending a fortune on hardware or software licenses. During the process of ensuring high availability (HA), I realized that many small companies might have a similar need, but with more pressing tasks, limited man-hours, and no single post that covers all the issues and solutions in one place, many companies and organisations leave single points of failure in place, living with the chance that they're not going to fail any time soon.
EDIT: It looks like Google has recently started peering in more places in AU with an anycast solution that fixes these issues.
On a little side note to the tutorial series I've been writing up lately for building a ZFS fileserver, this one is about why Google DNS is bad for your performance (well, depending on where you live). A real quick rundown: we all know what DNS does, yeah? It translates domains like www.scottyob.com into IP addresses like 188.8.131.52. A DNS server's job is to translate these domain names into the IP addresses we can use.
This is in the series of setting up a fileserver. We go through the process of setting up NFS & Samba.
Note: This post is one in a series aimed to be a tutorial eventually, it’s not currently finalised and at the moment exists as a place for collating thought and collecting feedback
Setting up the filesystems is a trivial task. First, you can see that when we created the storage pool 'datastore', it created a filesystem for us (also called datastore) that can act as a container for child filesystems. I'm going to go ahead and create a place to store my media and downloads now.
This is a post in a series on setting up a fileserver with ZFS and OpenIndiana. This post runs through setting up the operating system.
I want to write a quick and dirty blog post to share a little solution for downloading HTTP files during your off-peak usage period using Linux.
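The post itself uses standard Linux tools, but the timing logic amounts to "sleep until the off-peak window opens, then fetch". A rough Python equivalent (the 2am window and URL are placeholders):

```python
import datetime

def seconds_until(hour, now=None):
    """Seconds from `now` until the next occurrence of `hour`:00."""
    now = now or datetime.datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # window already passed today
    return (target - now).total_seconds()

# Sleep until 2am, then download (commented out so the sketch doesn't
# block or touch the network when run):
# import time, urllib.request
# time.sleep(seconds_until(2))
# urllib.request.urlretrieve("http://example.com/big.iso", "big.iso")
```

In practice an `at` job or cron entry driving `wget` does the same thing with less ceremony.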
As you may have been aware from my previous blog posts, I've been trying to make my life digital, that means any papers I get, I scan and file on a FileServer (with remote backups, etc, etc).
My scanner at home has a document feeder on it. The problem is that it doesn't do duplex, only one side at a time. So far, I can scan one side of the document, flip the paper over, then scan the back pages. This will result in two PDFs with two sets of pages:

```
Set A: 1,3,5,7
Set B: 8,6,4,2
```
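Recovering the true page order is just an interleave with Set B reversed, since flipping the whole stack makes the backs come out backwards. A sketch of the page-order computation (actually splicing the PDFs together would need a PDF library such as pypdf on top of this):

```python
def interleave(fronts, backs):
    """Recover true page order from a front-side scan and a back-side
    scan made by flipping the whole stack (so the backs arrive in
    reverse order)."""
    order = []
    for front, back in zip(fronts, reversed(backs)):
        order.extend([front, back])
    return order

interleave([1, 3, 5, 7], [8, 6, 4, 2])  # back to 1..8
```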
Is your Apple Mac feeling slow?
(this is a joke)
I've recently come up across some problems with the CRUD generator in Symfony, so in case anyone is googling for a solution out there, I'll try and help you along (or at least bump up the page references to the articles that helped me 😉 )
It’s a new year and time to start getting New Year’s resolutions into action. I’ve moved into my new area in the study, so I’ve started setting it up how I like.
The first step is to get my router set up. The router I’m using is a little ‘fit-PC’ box with two Ethernet ports. As you can see, it’s pretty tiny and doesn’t pack much in terms of power.
So, I've moved back home with the family now. One thing I've been meaning to do is digitise my life and get set up on the cloud. I think it'll be really basic to start off with: VPN access into my home network, firewall everything off, then create a few layers of the network.
So, since I had the scare with my Asterisk VoIP box being hacked and a telephone call to Antarctica being made, I decided to do something about it...
It has made me realise a few things, though. I mean, this guy is using open source tools to create some amazing things. I don't code half as much as I'd like to anymore... this is going to change. I have never used Emacs before... this is going to change. And I need a new computer. That's pretty much it for now.
In my previous post I mentioned how I didn't back up or migrate any of my data before we stopped paying the hosting company, so it's all lost.
This has me thinking how much of a shame it would be if I built a wealth of information, or a blog that I can use to identify myself and my work, only to have it all go if the machine it's hosted on dies. That would be bad, so this post is a short tutorial on how to perform nightly backups of your website without you having to lift a finger.
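The shape of such a script is roughly this (a sketch with made-up paths, not the tutorial's exact script; a real version would dump the database and tar the web root first, and be driven from cron):

```python
import datetime
import os
import shutil
import tempfile

def nightly_backup(src, backup_dir, keep=7):
    """Copy `src` into backup_dir under a dated name and prune old copies.

    Scheduled from cron (e.g. `0 2 * * * python3 backup.py`), this gives
    nightly backups without you lifting a finger.
    """
    os.makedirs(backup_dir, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    dest = os.path.join(backup_dir, f"site-{stamp}.tar.gz")
    shutil.copy(src, dest)
    # ISO-dated names sort chronologically, so pruning is a slice:
    backups = sorted(f for f in os.listdir(backup_dir) if f.startswith("site-"))
    for old in backups[:-keep]:
        os.remove(os.path.join(backup_dir, old))
    return dest

# Toy usage against a dummy archive in a temp directory:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "site.tar.gz")
with open(src, "w") as f:
    f.write("dummy archive")
dest = nightly_backup(src, os.path.join(tmp, "backups"))
```

Pushing the dated archive to a second machine (rsync, scp) is what makes it an actual off-site backup rather than a copy on the same doomed disk.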
Hello everyone. OK, so this is a brand new blog. What happened to all my old content? I'd like to say I just wanted to start fresh, but the truth is, I was lazy and didn't back up any data from my hosting company before my term ended with them... so I lost my database and content. I'll re-post the good stuff, hopefully more in-depth, on this site though. Stay tuned.
This blog is going to be a lot more personal than my last. It's going to have technical and personal posts where I'll post about anything and everything that takes my fancy. Good times 🙂