Diq's Den - www.couyon.net - Blog

If all of your MySQL SSL clients just broke...
Fri, 05 Jun 2015 17:30:42 GMT - http://www.couyon.net/blog/if-all-of-your-mysql-ssl-clients-just-broke

EDIT: It looks like RedHat pushed a new build that fixes the issue ->
https://rhn.redhat.com/errata/RHBA-2015-1129.html

If all of your MySQL SSL clients and replication just broke, I'm guessing that you're running RedHat, CentOS, or something derived from RedHat. In short, RH modified OpenSSL to reject Diffie-Hellman (DH) key sizes smaller than 768 bits. Note that this is not the length of your private key. This is the ephemeral DH key used in Perfect Forward Secrecy.
Someone please correct me if I'm wrong, but selecting which DH group (and hence key size) to use is a function of the application making the SSL socket request. Therefore, you'll need an updated build of Percona Server or MySQL Community to fix this while keeping PFS ciphers.
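To see which cipher a connection actually negotiated, the session status variables are a quick check (hypothetical host and user shown):

    mysql -h db1.example.com -u repl -p \
          -e "SHOW STATUS LIKE 'Ssl_cipher';"

An empty Value column means the session isn't using SSL at all.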

One option is to use a secure cipher suite that skips the DH exchange entirely in your configs. We went this route to keep things going. On CentOS 6, one of the best choices is Camellia 128. If you're doing command line work, add a cipher argument along the lines of the sketch below.
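A minimal sketch of the client invocation, assuming the stock OpenSSL suite name CAMELLIA128-SHA (RSA key exchange, so the new DH size check never fires); swap in your own host, user, and CA path:

    mysql -h db1.example.com -u repl -p \
          --ssl-ca=/etc/pki/tls/certs/ca.pem \
          --ssl-cipher=CAMELLIA128-SHA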
You should also be able to add that to your my.cnf and get things going that way, too.
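Something like this should do it; MySQL treats dashes and underscores interchangeably in option files, and the same option exists on both sides:

    [client]
    ssl-cipher = CAMELLIA128-SHA

    [mysqld]
    ssl-cipher = CAMELLIA128-SHA

For a replication slave, the equivalent knob is MASTER_SSL_CIPHER in CHANGE MASTER TO.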

The error thrown in MySQL is error 2026, which is a catch-all for all SSL errors. It throws that error for bad certs, bad permissions, bad anything... which makes things difficult to track down. That said, if things broke in the past few days (June 4th), it's probably the cipher problem. We tracked it down in the yum log (openssl-1.0.1e-30.el6.9.x86_64). Whenever something suddenly stops working, always check the yum log first. Thanks RedHat!
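For the record, that check is a one-liner:

    # Look for recent openssl changes; the culprit build for us
    # was openssl-1.0.1e-30.el6.9.x86_64
    grep -i openssl /var/log/yum.log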

Routed networks in VirtualBox
Fri, 22 May 2015 00:08:21 GMT - http://www.couyon.net/blog/routed-networks-in-virtualbox

I recently had a project at work whose requirements forced me to think outside the box. They were:

  • Spin up hundreds or thousands of VMs dynamically, without much intervention from netops. These VMs would exist across many hypervisors and be provisioned on demand.
  • Allow inbound traffic to the guest VMs from networks outside the hypervisor.

The built-in VirtualBox networking options really wouldn't fit the bill. Here's why:

  • Bridged networking would require a lot of prior planning and work from netops. Would all the hypervisors be in the right VLAN to get the various subnets required? Some of the hypervisors might reside in smaller existing networks that would be /22-/24. Nope, too much work to support that across all of our locations.
  • Internal networks can only communicate with VMs residing on the same hypervisor. External access is a must.
  • Host-only networks only allow the VM to communicate with the hypervisor and other VMs on the same host. Similar to internal networks, but still no path to the outside world.
  • NAT networking would mean mapping every inbound port to a VM port by hand, which is a huge pain. Reducing work was a goal here.
So what's the solution? Routed networking, of course! What's that? There's no routed network option? Sure there is. It's called host-only networking with a local DHCP server. Next, you enable routing at the OS level. In Linux, set net.ipv4.ip_forward to 1 (for v4).
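A minimal host-side sketch, assuming the default vboxnet0 interface name and a made-up 10.99.0.0/24 guest subnet:

    # Create the host-only interface and give the hypervisor an address on it
    VBoxManage hostonlyif create                 # creates vboxnet0
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.99.0.1 --netmask 255.255.255.0

    # Hand out leases to guests on that interface
    VBoxManage dhcpserver add --ifname vboxnet0 --ip 10.99.0.2 \
        --netmask 255.255.255.0 --lowerip 10.99.0.10 --upperip 10.99.0.254 --enable

    # Attach a guest NIC and let the hypervisor route for it
    VBoxManage modifyvm testvm --nic1 hostonly --hostonlyadapter1 vboxnet0
    sysctl -w net.ipv4.ip_forward=1              # persist it in /etc/sysctl.conf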

That solves the problem of VMs getting outside the hypervisor. The next problem is return traffic knowing how to get back to the VM. Which hypervisor sent it? It's not bridged, so the VM can't ARP. It's not NAT'd, so the source isn't the hypervisor's MAC. How do you point return traffic at the right server?

Your answer here is a dynamic routing protocol. We use OSPF, and bird is the word. A simple bird instance can advertise the IP network on vboxnet0 to the rest of your network over OSPF. It's an easy way to bring up and tear down a lot of networks with little effort. You don't have to use OSPF; you could use something else like IS-IS (which we also use, just not here).
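A hedged sketch of a bird 1.x config for this, assuming a hypothetical router ID and eth0 as the uplink; marking vboxnet0 as a stub interface makes OSPF advertise the guest subnet without trying to form adjacencies on it:

    # /etc/bird.conf
    router id 192.0.2.10;

    protocol kernel {
        export all;              # install OSPF-learned routes into the kernel
    }

    protocol device {
    }

    protocol ospf {
        area 0.0.0.0 {
            interface "eth0" {
                cost 10;         # adjacency toward the upstream routers
            };
            interface "vboxnet0" {
                stub yes;        # advertise the subnet, send no hellos on it
            };
        };
    }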

In summary, host-only network + DHCP server on hypervisor + routing + OSPF = routed VM networking in VirtualBox. It works really well and is being used for our in-house Selenium testing (taste.weebly.com).

Ghost in the (embedded) Machine
Thu, 18 Dec 2014 18:29:07 GMT - http://www.couyon.net/blog/ghost-in-the-embedded-machine

EDIT: Nevermind

LDAP user/group auth in CentOS 7
Thu, 24 Jul 2014 16:27:23 GMT - http://www.couyon.net/blog/ldap-usergroup-auth-in-centos-7

We're still going through our CentOS 7 kickstart/build process here at Weebly, and we're uncovering new twists every day. There are the obvious ones like systemd, renamed packages, and things like that. Luckily, the process for enabling LDAP UNIX users and groups (and authentication) is the same in CentOS 7 as it is in CentOS 6. Looks like they're sticking with sssd. It had a few issues when it first landed in CentOS 6, but it seems stable and reliable now. That's about the most you can ask of an authentication provider.

Here's a link to my post about enabling LDAP auth in CentOS 6. It should work in CentOS 7 -- at least it did for us.
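For reference, a hedged sketch of the sssd route on CentOS 7, assuming a hypothetical directory at ldap.example.com with base DN dc=example,dc=com; the CentOS 6 post has the full walkthrough:

    # Pull in sssd and the authconfig glue
    yum install -y sssd sssd-ldap authconfig

    # Wire PAM/nsswitch to sssd and point it at the directory
    authconfig --enablesssd --enablesssdauth \
               --enableldap --enableldapauth \
               --ldapserver=ldap://ldap.example.com \
               --ldapbasedn="dc=example,dc=com" \
               --enablemkhomedir --update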