Department of Chemistry

California State University Stanislaus

Newsfeeds
Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

  • Costales: "Folder Color" app: Change the color of your folders in Ubuntu
    A simple, fast and useful app! Change the color of your folders in Nautilus in a really easy way, so that you can get a better visual layout!

    Folder Color in Ubuntu

    How to install? Just run these commands in a Terminal, log out and enjoy it!

      sudo add-apt-repository ppa:costales/folder-color
      sudo apt-get update
      sudo apt-get install folder-color -y

    More info.

  • Mark Shuttleworth: U talking to me?

    This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

    The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

    That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PaaS and SaaS and an Internet of Things that Don’t Bite; let’s make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

    It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

    So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the utopic unicorn. Give it stick. See you at vUDS.



  • Martin Pitt: Booting Ubuntu with systemd: Test packages available

    On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

    So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn’t go through /etc/init/*.conf with a fine-toothed comb yet to check which upstart jobs need to be ported; that’s now part of the TODO list.

    So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

      sudo add-apt-repository ppa:pitti/systemd
      sudo apt-get update
      sudo apt-get dist-upgrade
    

    This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. At this point, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.
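    For reference, after editing the stanza the linux line might look roughly like this (the kernel version and root device here are placeholders, not taken from the post; yours will differ):

```
linux /boot/vmlinuz-3.13.0-24-generic root=UUID=... ro quiet splash init=/lib/systemd/systemd
```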

    For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

      sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
      sudo update-grub
    

    Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment signs from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.
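    For instance, the relevant lines in /etc/default/grub could end up looking something like this sketch (your existing GRUB_CMDLINE_LINUX_DEFAULT contents will likely differ):

```
# menu visible again: GRUB_HIDDEN_TIMEOUT lines commented out
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
# boot with systemd by default
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"
```

    Remember that changes to this file only take effect after running sudo update-grub.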

    I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

    Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job) and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:

      sudo rm /etc/resolv.conf
      sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf
    

    Then you can boot back to systemd.
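    For the curious, such a counterpart unit is essentially a oneshot service wrapping resolvconf’s update handling. A minimal sketch of its shape (an illustration only, not the actual unit shipped in the PPA) could be:

```ini
# resolvconf.service (sketch): counterpart to the /etc/init/resolvconf.conf upstart job
[Unit]
Description=Nameserver information manager
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/resolvconf --enable-updates
ExecStop=/sbin/resolvconf --disable-updates

[Install]
WantedBy=multi-user.target
```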



  • Svetlana Belkin: vBlog Teaser

    I’m thinking of doing a vBlog about Ubuntu and other things:




  • Adam Stokes: new juju plugin: juju-sos

    juju-sos is my entryway into Go code and the juju internals. This plugin will execute sosreport on all machines known to juju, or on a specific machine of your choice, and copy the resulting reports to your local machine.

    An example of what this plugin does: first, some output of juju status to give you an idea of the machines I have:

    ┌[poe@cloudymeatballs] [/dev/pts/1] 
    └[~]> juju status
    environment: local
    machines:
      "0":
        agent-state: started
        agent-version: 1.18.1.1
        dns-name: localhost
        instance-id: localhost
        series: trusty
      "1":
        agent-state: started
        agent-version: 1.18.1.1
        dns-name: 10.0.3.27
        instance-id: poe-local-machine-1
        series: trusty
        hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
      "2":
        agent-state: started
        agent-version: 1.18.1.1
        dns-name: 10.0.3.19
        instance-id: poe-local-machine-2
        series: trusty
        hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
    services:
      keystone:
        charm: cs:trusty/keystone-2
        exposed: false
        relations:
          cluster:
          - keystone
          identity-service:
          - openstack-dashboard
        units:
          keystone/0:
            agent-state: started
            agent-version: 1.18.1.1
            machine: "2"
            public-address: 10.0.3.19
      openstack-dashboard:
        charm: cs:trusty/openstack-dashboard-0
        exposed: false
        relations:
          cluster:
          - openstack-dashboard
          identity-service:
          - keystone
        units:
          openstack-dashboard/0:
            agent-state: started
            agent-version: 1.18.1.1
            machine: "1"
            open-ports:
            - 80/tcp
            - 443/tcp
            public-address: 10.0.3.27
    

    Basically, what we are looking at is two machines running various services, in my case OpenStack Horizon and Keystone. Now suppose I have some issues with my juju machines and OpenStack, and I need a quick way to gather a bunch of data from those machines and send it to someone who can help. With my juju-sos plugin, I can quickly gather sosreports on each of the machines I care about with as little typing as possible.

    Here is the output from juju sos querying all machines known to juju:

    ┌[poe@cloudymeatballs] [/dev/pts/1] 
    └[~]> juju sos -d ~/scratch
    2014-04-23 05:30:47 INFO juju.provider.local environprovider.go:40 opening environment "local"
    2014-04-23 05:30:47 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
    2014-04-23 05:30:47 INFO juju.state open.go:133 dialled mongo successfully
    2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:53 Querying all machines
    2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(1)
    2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(2)
    2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 1
    2014-04-23 05:30:55 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
    2014-04-23 05:30:56 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
    2014-04-23 05:31:08 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
    ┌[poe@cloudymeatballs] [/dev/pts/1] 
    └[~]> ls $HOME/scratch
    sosreport-ubuntu-20140423040507.tar.xz  sosreport-ubuntu-20140423052125.tar.xz  sosreport-ubuntu-20140423052545.tar.xz
    sosreport-ubuntu-20140423050401.tar.xz  sosreport-ubuntu-20140423052223.tar.xz  sosreport-ubuntu-20140423052600.tar.xz
    sosreport-ubuntu-20140423050727.tar.xz  sosreport-ubuntu-20140423052330.tar.xz  sosreport-ubuntu-20140423052610.tar.xz
    sosreport-ubuntu-20140423051436.tar.xz  sosreport-ubuntu-20140423052348.tar.xz  sosreport-ubuntu-20140423053052.tar.xz
    sosreport-ubuntu-20140423051635.tar.xz  sosreport-ubuntu-20140423052450.tar.xz  sosreport-ubuntu-20140423053101.tar.xz
    sosreport-ubuntu-20140423052006.tar.xz  sosreport-ubuntu-20140423052532.tar.xz
    

    Another example of juju sos just capturing a sosreport from one machine:

    ┌[poe@cloudymeatballs] [/dev/pts/1] 
    └[~]> juju sos -d ~/scratch -m 2
    2014-04-23 05:41:59 INFO juju.provider.local environprovider.go:40 opening environment "local"
    2014-04-23 05:42:00 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
    2014-04-23 05:42:00 INFO juju.state open.go:133 dialled mongo successfully
    2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:70 Querying one machine(2)
    2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
    2014-04-23 05:42:08 INFO juju.sos main.go:99 Copying archive to "/home/poe/scratch"
    

    Fancy, fancy :)

    Of course this is a work in progress and I have a few ideas of what else to add here, some of those being:

    • Rename the sosreports to match the dns-name of the juju machine
    • Filter sosreport captures based on services
    • Optionally pass arguments to the sosreport command in order to run only the specific plugins I want, e.g.

      $ juju sos -d ~/sosreport -- -b -o juju,maas,nova-compute

    As usual, contributions are welcome, and installation instructions are located in the README.