Adapted from the Ubuntu documentation.
sudo snap install microk8s --classic --channel 1.23/stable
mkdir ~/.kube
sudo usermod -a -G microk8s $USER
# log out and back in (or run "newgrp microk8s") so the new group membership takes effect
sudo chown -f -R $USER ~/.kube
sudo microk8s config > ~/.kube/config
microk8s enable dns dashboard storage
microk8s kubectl expose \
-n kube-system deployment kubernetes-dashboard \
--type=NodePort --name=kubernetes-dashboard-np
# find the external port:
microk8s kubectl get service -n kube-system kubernetes-dashboard-np
Now find the dashboard login token -- it's a long, ugly string:
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
... now we can log in to the dashboard via the NodePort service.
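If you'd rather grab the port from a script, something like this should work (a sketch -- the jsonpath query and service name assume the setup above):
PORT=$(microk8s kubectl get service -n kube-system kubernetes-dashboard-np \
    -o jsonpath='{.spec.ports[0].nodePort}')
echo "Dashboard should be at https://$(hostname -I | awk '{print $1}'):${PORT}"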
To wipe the cluster and start again:
microk8s reset
Little-known OSX feature (or at least it was new to me):
You can specify a separate DNS resolver configuration per domain. This can be useful to supplement or override VPN connections.
Best of all, it's really easy -- to specify a custom server for mydomain.int, just run:
sudo mkdir /etc/resolver
echo "nameserver 10.10.10.10" | sudo tee /etc/resolver/mydomain.int
… and that's it.
It's documented in resolver(5), available on Apple's website.
Note: You can't use this to override the default resolver. Use System Preferences for that.
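To check the per-domain resolver is actually being picked up, a couple of commands I find handy -- note that dig and nslookup talk to DNS servers directly and ignore /etc/resolver, so test via the system resolver instead (somehost is just a placeholder):
scutil --dns | grep -A 3 mydomain.int
dscacheutil -q host -a name somehost.mydomain.int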
Tested on Solaris 11.1, Netatalk 3.0.4, and OSX 10.8.4.
Goal: back up the household laptops over the network to a Solaris fileserver.
This seems to work better with AFP than with SMB, so I installed netatalk 3.0.4 from source.
Note: This was cobbled together from information contained in a few other blogs -- I don't pretend to be a netatalk expert.
### Install prerequisite packages
pkg install developer/gcc-3 system/library/math/header-math \
developer/gnu developer/gnu-binutils \
developer/gnome/gettext libgcrypt openssl
### Download, build and install netatalk:
wget .../netatalk-3.0.4.tar.gz
tar xzf netatalk-3.0.4.tar.gz
cd netatalk-3.0.4
./configure --without-ddp --with-init-style=solaris --prefix=/usr/local/netatalk-3.0.4
make
sudo -s   # make install didn't like running under plain sudo
make install
### Setup PAM for netatalk
I'm not 100% sure why this would be required, I assume it's something to do with netatalk's PAM implementation.
cp /etc/pam.d/other /etc/pam.d/netatalk
### Make a ZFS dataset to store our data
… since Time Machine is designed to eventually consume all available space, a quota is a good idea.
zfs create -o mountpoint=/timemachine tank/timemachine
zfs create -o quota=1tb tank/timemachine/tdehesse
chown tdehesse /timemachine/tdehesse
### Setup Netatalk
cat > /usr/local/netatalk-3.0.4/etc/afp.conf <<EOF
;
; Netatalk 3.x configuration file
;
[Global]
; Global server settings
mimic model = MacPro
[timemachine_tdehesse]
path=/timemachine/tdehesse
time machine = yes
EOF
### Enable autodiscovery-related services
root@velocity:bin# svcadm enable svc:/network/dns/multicast:default
root@velocity:bin# svcadm enable svc:/system/avahi-bridge-dsd:default
### Start netatalk
Netatalk was kind enough to include SMF functionality for us :)
svcadm enable netatalk
svcs netatalk
STATE STIME FMRI
online 11:11:10 svc:/network/netatalk:default
### Client-side setup
This is really simple -- since netatalk advertises the volume as Time Machine compatible, it should "just work" and appear in the Time Machine preferences as a regular destination, without any client-side hacks. That should also help if we ever need to do a bare-metal recovery.
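If you prefer the command line to the preference pane, tmutil can point the Mac at the new share directly -- a sketch, assuming the server is reachable as velocity.local, using the share name from afp.conf above and a placeholder password:
sudo tmutil setdestination afp://tdehesse:[email protected]/timemachine_tdehesse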
Tested on Solaris 11.1, but should work on 10 and up.
Just a quick shell script and cron job to perform a monthly scrub of all pools in your system. Whether this is appropriate or not depends on the frequency of data reads in your system and whether they would pick up any potential data corruption or hardware issues.
scrub-all.sh
#!/usr/bin/bash
export PATH=/usr/sbin:/usr/bin
zpool list -H -o name | while read pool; do
zpool scrub $pool
done
Example cron entry to run this monthly:
# min hour dom mon dow
10 2 1 * * /home/tdehesse/bin/scrub-all.sh
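It can also be worth checking the results afterwards -- zpool status -x only reports pools with problems, so a follow-up along these lines (manually or from a second cron job) is cheap insurance:
#!/usr/bin/bash
export PATH=/usr/sbin:/usr/bin
# prints "all pools are healthy" when there is nothing to worry about
zpool status -x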
Goal: rename the Solaris zone oldzonename to renamedzone from the global zone's perspective, rename the ZFS dataset to match the new name, and update the hostname inside the non-global zone.
Here is the Oracle documentation on renaming zones, and on renaming a Solaris instance.
root@globalzone:~# zoneadm list -vi
ID NAME STATUS PATH BRAND IP
4 oldzonename running /zones/oldzonename solaris excl
Halt the zone:
zoneadm -z oldzonename halt
Set the new zone name:
root@globalzone:~# zonecfg -z oldzonename
zonecfg:oldzonename> set zonename=renamedzone
zonecfg:renamedzone> set zonepath=/zones/renamedzone
zonecfg:renamedzone> commit
zonecfg:renamedzone> exit
Try renaming the ZFS dataset:
root@globalzone:~# zfs rename rpool/zones/oldzonename rpool/zones/renamedzone
cannot rename 'rpool/zones/oldzonename': child dataset with inherited mountpoint is used in a non-global zone
This is because the dataset **rpool/zones/oldzonename/rpool** has **zoned=true**.
We need to remove that setting before trying again:
root@globalzone:~# zfs set zoned=off rpool/zones/oldzonename/rpool
root@globalzone:~# zfs rename rpool/zones/oldzonename rpool/zones/renamedzone
cannot mount 'rpool/zones/renamedzone/rpool/ROOT/solaris-2' on '/': directory is not empty
cannot mount 'rpool/zones/renamedzone/rpool/ROOT/solaris-2/var' on '/var': directory is not empty
root@globalzone:~# zfs set zoned=on rpool/zones/renamedzone/rpool
(The "cannot mount" warnings can be ignored: those datasets are only mounted inside the zone, and we switch the zoned property straight back on.)
Boot the zone, check everything works:
root@globalzone:~# zoneadm -z renamedzone boot
root@globalzone:~# zlogin renamedzone
[Connected to zone 'renamedzone' pts/6]
Oracle Corporation SunOS 5.11 11.1 May 2013
root@oldzonename:~#
root@oldzonename:~#
Now update the hostname (from inside the zone -- note the prompt still shows the old name):
root@oldzonename:~# svccfg -s identity:node setprop config/nodename=renamedzone
root@oldzonename:~# svccfg -s identity:node setprop config/loopback=renamedzone
root@oldzonename:~# svccfg -s identity:node refresh
root@oldzonename:~#
root@oldzonename:~# hostname
renamedzone
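Back in the global zone, a quick sanity check that everything now agrees on the new name:
root@globalzone:~# zoneadm list -vc | grep renamedzone
root@globalzone:~# zfs list -r -o name,mountpoint rpool/zones/renamedzone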
If we need to determine which packages have been added to a Solaris 11 machine -- say because someone is asking for documentation, or you need to build a similar machine/zone -- we can use the pkg command's built-in history function:
Show everything, from all time:
# pkg history -l
Show which base group is installed (small-server, large-server or desktop groups)
# pkg list group/system*
NAME (PUBLISHER) VERSION IFO
group/system/solaris-small-server 0.5.11-0.175.0.0.0.2.2576 i--
Show just install operations:
# pkg history -Ho start,command | grep "pkg install"
(long output removed)
Now verify these are all still installed (the inverse grep removes the zone-install-time operations):
# pkg list -Hs $(pkg history -Ho command | grep install | sed 's/.*install//' | grep -v -- -z)
developer/build/ant Apache Ant
developer/build/make A utility for managing compilations.
diagnostic/top provides a rolling display of top cpu using processes
editor/nano A small text editor
terminal/xterm xterm - terminal emulator for X
x11/diagnostic/x11-info-clients X Window System diagnostic and information display clients
x11/library/libxp libXp - X Print Client Library
x11/session/xauth xauth - X authority file utility
x11/xclock xclock - analog / digital clock for X
Then store this in your documentation, pipe it to awk '{print $1}', or use whatever tool you prefer to grab the first column.
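For example, a one-liner along these lines (just the previous command with the awk step bolted on) drops the package names into a file ready to paste into documentation:
# pkg list -Hs $(pkg history -Ho command | grep install | sed 's/.*install//' | grep -v -- -z) | awk '{print $1}' > /tmp/installed-packages.txt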
Solaris 11 has a built-in mechanism to take hourly/daily/weekly snapshots of your ZFS datasets, e.g. rpool/export/home. The service is called "time-slider", and it can integrate with a nice GUI (which I don't have installed).
Older snapshots are automatically deleted either when space gets too tight, or when more than X snapshots of a given type exist (by default four weekly snapshots are kept -- see the svccfg properties below).
The Oracle documentation is not very useful, however.
In short, the required steps are:
# pkg install time-slider
# svcadm restart dbus
# svcadm enable time-slider
We need to specify which datasets will be enabled -- for example our home directories:
zfs set com.sun:auto-snapshot=true rpool/export/home
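The property is inherited by child datasets, so individual datasets can be opted back out -- for example ("scratch" is just a hypothetical dataset name):
zfs set com.sun:auto-snapshot=false rpool/export/home/scratch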
Once we enable time-slider, it will create some auto-snapshot services -- frequent, daily, hourly and weekly. We enable them with:
# svcadm enable auto-snapshot:frequent
# svcadm enable auto-snapshot:hourly
# svcadm enable auto-snapshot:daily
# svcadm enable auto-snapshot:weekly
It seems that when we install time-slider, it adds some dbus configuration which necessitates the dbus restart above -- otherwise the time-slider service will fail to come online due to a dbus permission error.
We can tweak the configuration of each service using service properties -- view with listprop, edit with editprop:
# svccfg -s auto-snapshot:frequent listprop zfs
zfs application
zfs/interval astring minutes
zfs/keep astring 3
zfs/period astring 15
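For example, to keep eight weekly snapshots instead of the default four (property names taken from the listprop output above):
# svccfg -s auto-snapshot:weekly setprop zfs/keep = astring: 8
# svcadm refresh auto-snapshot:weekly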
Solaris 11 is patched differently from previous versions of Solaris.
Solaris 11 is installed and patched/updated from "Image Packaging System" (IPS) repositories, typically over http/https. Oracle maintains two repositories: a public one which is only updated with major releases, and a support repository which receives regular updates -- the latter is restricted to customers with a current Solaris support contract. See the Oracle KB article at the bottom for details on setting up access to Oracle's restricted repository.
Major updates (e.g. 11.0 to 11.1) are applied in the same way as support repository updates, but care must be taken to read the instructions as multiple steps may be required (hint: you can't go straight from 11.0 to 11.1)
To list configured publishers:
# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F https://oracle-oem-oc-mgmt-hostname:8002/IPS/
There is a special "incorporation" package called entire, whose dependencies will cause all packages to be updated to the same level.
We can see the current version of the entire package using:
# pkg list entire
NAME (PUBLISHER) VERSION IFO
entire 0.5.11-0.175.1.0.0.24.2 i--
.. and we can see all available "entire" packages using:
# pkg list -af entire
NAME (PUBLISHER) VERSION IFO
entire 0.5.11-0.175.1.0.0.24.2 ---
entire 0.5.11-0.175.0.12.0.4.0 i--
entire 0.5.11-0.175.0.10.0.5.0 ---
entire 0.5.11-0.175.0.0.0.2.0 ---
Note the "i--" showing the currently installed version.
To read the version numbers, for example:
0.5.11-0.175.0.12.0.4.0
               ^-------- 12 == Support repository update 12
             ^---------- 0 == Solaris 11 GA (1 would be Solaris 11.1, etc.)
         ^-------------- 175 == this was an internal build number during S11 development, can be ignored
    ^------------------- 11 == Solaris 11 (vs 10)
  ^--------------------- 5 == Solaris (vs SunOS)
The last two are kept for legacy compatibility (so 5.8 can be compared to 5.9, 5.11 by scripts which parse the output of uname)
Solaris updates are typically performed on a clone of the current boot environment, similar to "Live upgrade" in previous solaris versions, except better.
The beadm utility will show the current system's available boot environments
# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
s11-sru12 NR / 10.67G static 2012-11-28 17:47
solaris - - 47.12M static 2012-07-09 15:01
solaris-1 - - 14.22M static 2012-09-21 13:18
Note: It is helpful to give boot environments descriptive names.
In this example we see "s11-sru12" is active "N"ow and on "R"eboot.
The pkg command will create new boot environments when performing more major changes such as installing a solaris update, or whenever the appropriate flag is provided.
0: If you have a local repository server, update its contents to the desired release level.
1: Consult Oracle's documentation for known issues and to ensure you are aware of the current version (tip: search for "Solaris SRU Index")
2: Run pkg utility to create a new, patched boot environment
3: Reboot to use the new boot environment.
# pkg update --accept \
--require-new-be \
--be-name s11-sru12 \
[email protected]
This will clone the current boot environment, and that of any attached zones, apply patches and then wait for you to activate the new boot environment.
This can be done while applications are running, as long as the applications are on their own filesystems.
Activate the new boot environment when ready using:
# beadm activate s11-sru12
# init 6
Everything under rpool/ROOT will be cloned, while everything not under rpool/ROOT (e.g. /export/home) will not be cloned.
This means that application data MUST be kept on a separate filesystem to ensure data is not unexpectedly lost during live upgrade.
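A quick way to see which datasets are part of the boot environment (and will be cloned) versus shared datasets like /export/home (which won't be):
# zfs list -r -o name,mountpoint rpool/ROOT
# zfs list -o name,mountpoint rpool/export rpool/export/home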
Also, starting with Solaris 11.1, certain directories under /var (e.g. mail) have become symbolic links into the shared /var/share dataset, so they are no longer boot-environment specific.
Consider the following example:
rpool/ROOT/solaris 547G 5.0G 109G 5% /
rpool/ROOT/solaris/var 547G 20G 109G 16% /var
rpool/export 547G 33K 109G 1% /export
rpool/export/home 547G 2.1G 109G 2% /export/home
1pm: Apply patches/support repository update
2pm: Modify files under /export/home and /opt
3pm: Reboot to new boot environment
After the reboot, the changes we made to /export/home are still part of the active boot environment, while changes to /opt (part of root filesystem) are part of the now-inactive boot environment.
Useful oracle documentation:
Oracle Support Document 1433186.1 (Oracle Solaris 11 Image Packaging System (IPS) Frequently Asked Questions (FAQ))
(https://support.oracle.com/epmos/faces/DocumentDisplay?id=1433186.1)
Tested on Solaris 11.1, specifically [email protected]
I couldn't find this documented anywhere, so here's what I found.
How do solaris 11 zones determine which boot environment to use? Seems they use ZFS properties.
On my test zone, I have:
root@myhostname:/etc# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris N / 488.90M static 2013-02-20 15:45
solaris-z R - 60.0K static 2013-02-21 10:15
And here are the ZFS properties:
root@myhostname:/etc# zfs get -r -d 1 -o name,value,source org.opensolaris.libbe:active rpool/ROOT
NAME VALUE SOURCE
rpool/ROOT - -
rpool/ROOT/solaris off local
rpool/ROOT/solaris-z on local
Each zone boot environment also has a property called org.opensolaris.libbe:parentbe, which presumably allows the global zone's pkg command to operate on the correct zone boot environment:
root@myhostname:/etc# zfs get -r -d 1 -o name,value,source org.opensolaris.libbe:parentbe rpool/ROOT
NAME VALUE SOURCE
rpool/ROOT - -
rpool/ROOT/solaris 89b52c6a-80a0-670d-e178-cb91c57061d3 local
rpool/ROOT/solaris-z 89b52c6a-80a0-670d-e178-cb91c57061d3 local
The parentbe seems to be used to determine which zone boot environments are unbootable (showing as "!R" in the "Active" column of beadm list).
We can edit these properties directly, but obviously it's better to use the beadm utility -- however knowing which properties it uses can be useful for manually correcting problems.
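For example, if a zone boot environment's active flag ever ends up wrong, the properties above can be flipped by hand (inside the zone) -- though beadm activate is the right tool whenever it works:
root@myhostname:/etc# zfs set org.opensolaris.libbe:active=off rpool/ROOT/solaris
root@myhostname:/etc# zfs set org.opensolaris.libbe:active=on rpool/ROOT/solaris-z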
Useful documentation includes the solaris(5) man page.
The following command will generate a basic SMF manifest for a simple daemon process (in my case, the Transmission BitTorrent daemon):
$ svcbundle \
-s service-name=application/transmissionbt \
-s start-method="/usr/local/transmission-2.76/bin/transmission-daemon" \
-s model=daemon \
> /tmp/xmiss.xml
In my case, I want the Transmission daemon to run as a non-root user, so I edit the file and replace:
<instance enabled="true" name="default"/>
with
<instance enabled="true" name="default">
  <method_context working_directory='/var/tmp'>
    <method_credential user='dluser' group='staff'/>
  </method_context>
</instance>
Finally, import the manifest:
svccfg import /tmp/xmiss.xml
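Since the generated instance is marked enabled="true", the import alone should start the service; if it doesn't, these are the first things I'd check (service name as used above, log path following SMF's usual naming):
svcs -l application/transmissionbt
svcs -xv application/transmissionbt
tail /var/svc/log/application-transmissionbt:default.log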
Had a "checkpoint error" when installing a zone:
2012-12-10 00:58:35,549 InstallationLogger ERROR Checkpoint execution error:
2012-12-10 00:58:35,599 InstallationLogger ERROR
2012-12-10 00:58:35,649 InstallationLogger ERROR The following pattern(s) did not match any allowable packages. Try
2012-12-10 00:58:35,700 InstallationLogger ERROR using a different matching pattern, or refreshing publisher information:
2012-12-10 00:58:35,750 InstallationLogger ERROR
2012-12-10 00:58:35,800 InstallationLogger ERROR pkg:///[email protected],5.11-0.175.1.1.0.4.0:20121106T001344Z
2012-12-10 00:58:35,851 InstallationLogger ERROR pkg:/group/system/solaris-small-server
This can occur when the pkg repository is out of date compared to your global zone, but this wasn't my problem -- I was actually using oracle's support repository at the time.
The underlying root cause was a misconfigured application/pkg/system-repository:default service -- specifically, the config/host SMF property was set to the local hostname, server1, while the automated installer seems to assume it's configured on localhost (or 127.0.0.1).
Check yours isn't broken by running curl http://localhost:1008 -- you should see an HTTP 404 error. If you get connection refused, you're likely having the same problem I was, or your system-repository service is offline.
NB: I didn't setup this system, but I presume it was set this way out of the box.
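A sketch of the sort of check and fix, using the service and property names mentioned above:
svcprop -p config/host application/pkg/system-repository:default
svccfg -s application/pkg/system-repository:default setprop config/host = astring: 127.0.0.1
svcadm refresh application/pkg/system-repository:default
svcadm restart application/pkg/system-repository:default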
Notes for setting up a Solaris 11 AI (Automated Installation) server for booting a SPARC system over the network, then booting it without using DHCP (which might conflict with another DHCP server on the network)
Background reading:
Oracle's Documentation
Blog entry on Sparc installs without DHCP
Create our service -- this will create the equivalent of a JumpStart boot server.
installadm create-service -n s11sparc_install -a sparc
Or we can do this from an AI ISO file, per the manual page for installadm:
installadm create-service -n sol-11_1-sparc \
  -s /export/isos/sol-11_1-ai-sparc.iso \
  -d /export/images/sol-11_1-sparc
This would allow us, for example, to use an older AI installation -- e.g. if we have an install server running 11.1 but want to install a client running 11.0 for testing.
Note: the install server tells the client which miniroot to download based on the service selected above. The miniroot needs to match the Solaris version we're installing (for the ultra-simple case, with no criteria set up, our client will look at the default-sparc alias).
To install from a local pkg server rather than oracle's public server, we customise the publisher URL the client will use (Documentation link)
installadm export -n s11sparc_install -m orig_default -o /tmp/localrepo.xml
… near the bottom, specify our local repo mirror -- customise the IP.
<source>
<publisher name="solaris">
<origin name="http://10.1.1.5/solaris/release"/>
</publisher>
</source>
Set this as the default manifest:
installadm create-manifest -n s11sparc_install -d \
  -f /tmp/localrepo.xml -m localrepo
installadm list -n s11sparc_install -m
installadm create-manifest -n default-sparc -d \
  -f /tmp/localrepo.xml -m localrepo
installadm list -n default-sparc -m
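As a sanity check that the manifest being served actually contains our mirror URL (installadm export writes to stdout when no output file is given):
installadm export -n default-sparc -m localrepo | grep origin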
Booting with "boot net - install" should now perform a fully automated installation, using the first appropriate disk as an installation target.
"boot net - install aimanifest=prompt" will allow us to specify a manifest XML file at boot time, avoiding the need to worry about manifest selection criteria for very small deployments
... and "boot net" alone will boot from the network and give a prompt to do an interactive installation, or a shell which we can use as a recovery environment (very useful if things are really broken).
On the client we should be able to boot using something like this for an interactive installation -- note that this won't use the manifest file we created above.
{28} ok show-nets
a) /pci@13,700000/network@0,3
b) /pci@13,700000/network@0,2
c) /pci@13,700000/network@0,1
d) /pci@13,700000/network@0
q) NO SELECTION
Enter Selection, q to quit: c
/pci@13,700000/network@0,1 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
{28} ok nvalias net /pci@13,700000/network@0,1
{28} ok
{28} ok setenv network-boot-arguments host-ip=10.1.1.10,router-ip=10.1.1.5,subnet-mask=255.255.255.0,file=http://10.1.1.5:5555/cgi-bin/wanboot-cgi
{28} ok boot net -sv
Resetting...
[long output clipped]
Configuring devices.
Hostname: solaris
Requesting System Maintenance Mode
SINGLE USER MODE
Enter user name for system maintenance (control-d to bypass): root
Enter root password (control-d to bypass):
single-user privilege assigned to root on /dev/console.
Entering System Maintenance Mode
Dec 4 01:53:31 su: 'su root' succeeded for root on /dev/console
Oracle Corporation SunOS 5.11 11.1 September 2012
root@solaris:~# exit
… system presents a menu offering to install etc.
Fully automated install testing can wait for another day.
How to authenticate Solaris logins against Active Directory over LDAP.
Goal: have Active Directory users resolvable through the name service and able to log in to Solaris.
Environment: Solaris 11 talking to Active Directory. Windows 2003 seems similar from various bits of documentation and should be very similar, as should Solaris 10, but that's not what I've used here.
This is a quick summary of steps. Other people have written this up better than I have, for example here: http://blog.scottlowe.org/2007/04/25/solaris-10-ad-integration-version-3/
Or here with http://www.seedsofgenius.net/solaris/solaris-authentication-login-with-active-directory
Install "Identity Management for Unix" Server Role on your AD Controllers:
Open Server Manager,
Open the "Identity Management for Unix" program under administrative tools, then create a new NIS domain.
Populate Unix attributes on the relevant users:
Now you need to populate the Unix-related attributes -- uid, home directory (use /home/$uid), etc. -- under the Unix tab of each user who should be able to log in.
Configuring the Active Directory name service client:
In this example the AD domain is domain.name.local and we bind with a dedicated proxy user, "Unix LDAP Search"; you can find its DN with:
dsquery user -name Unix*
Warning: the ldapclient command below will overwrite your nsswitch.conf file!
ldapclient manual \
-a authenticationMethod=simple \
-a defaultSearchBase=DC=domain,DC=name,DC=local \
-a credentialLevel=proxy \
-a proxyDN="cn=Unix LDAP Search,ou=users,DC=domain,DC=name,DC=local" \
-a domainName=domain.name.local \
-a proxyPassword=secret123 \
-a defaultSearchScope=sub \
-a attributeMap=passwd:gecos=cn \
-a attributeMap=passwd:homedirectory=unixHomeDirectory \
-a attributeMap=shadow:userpassword=unixUserPassword \
-a objectClassMap=group:posixGroup=group \
-a objectClassMap=passwd:posixAccount=user \
-a objectClassMap=shadow:shadowAccount=user \
-a serviceSearchDescriptor=passwd:ou=users,DC=domain,DC=name,DC=local?sub \
-a serviceSearchDescriptor=group:ou=Unix\ Groups,DC=domain,DC=name,DC=local?sub \
-a defaultServerList="10.1.1.21 10.1.1.22"
Using this data in the name service switch:
Add "ldap" to your "passwd" and "group" map lookups. Here's my file:
passwd: files ldap
group: files ldap
hosts: files dns
ipnodes: files dns
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
publickey: files
netgroup: files
automount: files
aliases: files
services: files
printers: user files
project: files
auth_attr: files
prof_attr: files
tnrhtp: files
tnrhdb: files
sudoers: files
Apply with:
svcadm restart name-service/switch
svcadm restart name-service/cache
Verify it's working
At this point, users should be in the passwd database (as if they were in the local /etc/passwd file), but they won't be able to log in because we haven't setup the PAM configuration yet.
On Solaris, run getent passwd and you should see LDAP-related entries.
If you need to tweak anything, remember to restart the name service cache.
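A couple of quick checks I find useful at this point (someaduser is a placeholder for one of the accounts you populated above):
getent passwd someaduser
ldaplist -l passwd someaduser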
Here's my /etc/pam.conf (it could just as easily be multiple files in /etc/pam.d/*)
#
# Sourced from http://docs.oracle.com/cd/E23824_01/html/821-1455/schemas-250.html#scrolltoc
# on 28 NOV 2012
#
# Authentication management
#
# login service (explicit because of pam_dial_auth)
#
login auth requisite pam_authtok_get.so.1
login auth required pam_dhkeys.so.1
login auth required pam_unix_cred.so.1
login auth required pam_dial_auth.so.1
login auth binding pam_unix_auth.so.1 server_policy
login auth required pam_ldap.so.1
#
# SSH public key shouldn't check LDAP/AD account lock status, but should use it
# for uid/home dir etc (pam_unix*)
#
sshd-pubkey auth requisite pam_authtok_get.so.1
sshd-pubkey auth required pam_dhkeys.so.1
sshd-pubkey auth required pam_unix_cred.so.1
sshd-pubkey auth required pam_unix_auth.so.1
sshd-pubkey auth required pam_dial_auth.so.1
sshd-pubkey account requisite pam_roles.so.1
sshd-pubkey account required pam_unix_account.so.1
#
# rlogin service (explicit because of pam_rhost_auth)
#
rlogin auth sufficient pam_rhosts_auth.so.1
rlogin auth requisite pam_authtok_get.so.1
rlogin auth required pam_dhkeys.so.1
rlogin auth required pam_unix_cred.so.1
rlogin auth binding pam_unix_auth.so.1 server_policy
rlogin auth required pam_ldap.so.1
#
# rsh service (explicit because of pam_rhost_auth,
# and pam_unix_auth for meaningful pam_setcred)
#
rsh auth sufficient pam_rhosts_auth.so.1
rsh auth required pam_unix_cred.so.1
rsh auth binding pam_unix_auth.so.1 server_policy
rsh auth required pam_ldap.so.1
#
# PPP service (explicit because of pam_dial_auth)
#
ppp auth requisite pam_authtok_get.so.1
ppp auth required pam_dhkeys.so.1
ppp auth required pam_dial_auth.so.1
ppp auth binding pam_unix_auth.so.1 server_policy
ppp auth required pam_ldap.so.1
#
# Default definitions for Authentication management
# Used when service name is not explicitly mentioned for authentication
#
other auth required pam_authtok_get.so.1
other auth required pam_dhkeys.so.1
other auth required pam_unix_cred.so.1
other auth binding pam_unix_auth.so.1 server_policy
other auth required pam_ldap.so.1
#
# passwd command (explicit because of a different authentication module)
#
passwd auth binding pam_passwd_auth.so.1 server_policy
passwd auth required pam_ldap.so.1
#
# cron service (explicit because of non-usage of pam_roles.so.1)
#
cron account required pam_unix_account.so.1
#
# Default definition for Account management
# Used when service name is not explicitly mentioned for account management
#
other account requisite pam_roles.so.1
other account binding pam_unix_account.so.1 server_policy
other account required pam_ldap.so.1
#
# Default definition for Session management
# Used when service name is not explicitly mentioned for session management
#
other session required pam_unix_session.so.1
#
# Default definition for Password management
# Used when service name is not explicitly mentioned for password management
#
other password required pam_dhkeys.so.1
other password requisite pam_authtok_get.so.1
other password requisite pam_authtok_check.so.1
other password required pam_authtok_store.so.1 server_policy
#
# Support for Kerberos V5 authentication and example configurations can
# be found in the pam_krb5(5) man page under the "EXAMPLES" section.
#
Configure PAM to allow SSH key logins while LDAP is enabled
You may have noticed that the pam.conf file above has a non-LDAP configuration for the sshd-pubkey service. This is because it seems that the default ("other") PAM service will use the ldap PAM module which attempts to confirm an account isn't locked. This isn't possible when authenticating with a simple LDAP bind, because the SSH daemon doesn't have access to the user's password and therefore can't verify whether the user's account is locked.
See the "sshd-pubkey" lines in the example above.
Side note: Oracle's documentation claims this isn't possible, but it works just fine -- however there is a compromise: if you lock a user account, they will still be able to log in using SSH keys!
Full credit: this is based on the following blog post: http://znogger.blogspot.com.au/2010/05/solaris-automatic-creation-of-home-dirs.html
Solaris' automounter has a feature called "Executable maps" -- we can specify a shell script which will create a directory on the access attempt. Here it is:
#!/bin/bash
#
# Automounter executable script.
# This script must be referenced in file /etc/auto_master in order
# to have any effect. Furthermore it must have the sticky bit set.
#
# This script receives as $1 the name of the object (file or directory)
# that is being accessed under the mount point. For example if the
# mount point is /home and someone is accessing /home/johndoe then
# $1 will be "johndoe".
# The script is expected to write the path of the physical location
# on stdout.
# ------------------------------------------------------------------------------
# Customizations
# Path of our mount point. This must be the same as the name
# in the first column in the file /etc/auto_master.
HOMEDIRPATH=/home
# Where the physical user home directories are
PHYSICALDIRPATH=/export/home
# The group owner to give to a user's home directory
HOMEDIRGROUP="staff"
# ------------------------------------------------------------------------------
hdir=$HOMEDIRPATH/$1
# Sanity check
getent passwd $1 > /dev/null
if [ $? -ne 0 ]; then
    exit
fi
# Now we know that $1 is a valid user with a home directory
# under $HOMEDIRPATH
# Next see if the user's physical home dir exists. If not, create it.
phdir="$PHYSICALDIRPATH/$1"
if [ -d "$phdir" ]; then
    # Yes, the user's physical home dir already exists.
    # Return the path of the physical home dir to the automounter
    # and exit.
    echo "localhost:$phdir"
    exit
fi
cp -r /etc/skel/ $phdir/
# Set owner and group
chown -R "$1":"$HOMEDIRGROUP" $phdir
# Return the path of the physical home dir to the automounter
# and exit.
echo "localhost:$phdir"
exit
This script needs to be used by /etc/auto_master -- here's mine:
cat > /etc/auto_master << HOMEEOF
#
# Copyright (c) 1992, 2011, Oracle and/or its affiliates. All rights reserved.
#
# Master map for automounter
#
+auto_master
/net            -hosts          -nosuid,nobrowse
/home           /etc/autohomedir
/nfs4           -fedfs          -ro,nosuid,nobrowse
HOMEEOF
Don't forget to put a sensible .bash_profile and .bashrc into /etc/skel
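As the comments in the script say, it must be executable and have the sticky bit set, and the automounter needs a restart to pick up the new master map:
chmod 755 /etc/autohomedir
chmod +t /etc/autohomedir
svcadm restart system/filesystem/autofs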
# create IP interfaces on the two physical datalinks
ipadm create-ip net0
ipadm create-ip net5
# test addresses on each interface, used by in.mpathd for probe-based failure detection
ipadm create-addr -T static -a 10.1.1.122/24 net0/test
ipadm create-addr -T static -a 10.1.1.123/24 net5/test
# create the IPMP group over both interfaces, then give it the data address
ipadm create-ipmp -i net0 -i net5 admipmp0
ipadm create-addr -T static -a 10.1.1.121/24 admipmp0/v4
ipadm show-addr
ipadm show-if
# add a probe test target (persistent host route marking 10.1.1.1 as a target)
route -p add 10.1.1.1 10.1.1.1 -static
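A few checks worth running afterwards -- ipmpstat shows the group, the probe targets and (continuously, until interrupted) the probes themselves:
ipmpstat -g
ipmpstat -t
ipmpstat -p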
OSX 10.7 and 10.8 have some issues talking to OpenLDAP and 389 (Red Hat Directory Server), because the server advertises non-plain BINDs in the root DSE entry, but these binds don't work when your user entries have hashed passwords (which they should).
PS: 10.7.0 and 10.7.1 are horribly broken and should not be used by anyone for LDAP related work. 10.8.0 seems okay at first glance.
Shamelessly borrowing from the following blog entry:
http://blog.smalleycreative.com/administration/fixing-openldap-authentication-on-os-x-lion/
... however, I prefer the following shell snippet:
for plist in /Library/Preferences/OpenDirectory/Configurations/LDAPv3/*.plist
do
  for method in CRAM-MD5 NTLM GSSAPI
  do
    /usr/libexec/PlistBuddy -c "add ':module options:ldap:Denied SASL Methods:' string $method" "$plist"
  done
done
killall opendirectoryd # this refreshes cache
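To confirm the change took, PlistBuddy can print the array back (the plist filename depends on your LDAP server configuration, so adjust the path):
/usr/libexec/PlistBuddy -c "Print ':module options:ldap:Denied SASL Methods:'" /Library/Preferences/OpenDirectory/Configurations/LDAPv3/yourserver.plist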