## Note: this is my personal todo and ideas list (alexh@)

* cryptdisks
  - Improve to support external scripts/programs providing passphrases
* linuxulator
  - port to x86_64
  - separate out common arch parts (linprocfs, for example)
* Update cryptsetup
* Fix the crash analysis script (or rather the programs it calls)
* route show
* Take a look at updating lvm/dm/libdm
* sync up vr
  o Added VT6105M specific register definitions. VT6105M has the following
    hardware capabilities:
    - Tx/Rx IP/TCP/UDP checksum offload.
    - VLAN hardware tag insertion/extraction. Due to lack of information
      about getting the extracted VLAN tag in the Rx path, VLAN hardware
      support has not been implemented yet.
    - CAM (Content Addressable Memory) based 32-entry perfect multicast/
      VLAN filtering.
    - 8 priority queues.
  o Implemented CAM based 32-entry perfect multicast filtering for VT6105M.
    If the number of multicast entries is greater than 32, vr(4) uses
    traditional hash based filtering.
* rip out the disk partitioning from the disk subsystem and implement it in
  a more general fashion
  - crazy idea: as dm targets with an auto-configuration option!
* sync some more opencrypto from OpenBSD
* ATA (automatic) spindown (see FreeBSD current)
* Update callout
  - http://svn.freebsd.org/viewvc/base?view=revision&revision=127969
* inv ctxsw rusage
  - see irc logs
  - some incorrect accounting going on, don't remember details :)
* unionfs update
  - make it work without whiteout

### Boring:

* RedZone, a buffer corruption protection for the kernel malloc(9) facility,
  has been implemented.
  - This detects both buffer underflows and overflows at runtime on free(9)
    and realloc(9), and prints backtraces from where memory was allocated
    and from where it was freed.
  - see irc log below.
* port uart driver (?)
* port wscons (?)
  - or update syscons - probably way too much effort (wscons)
* port usb4bsd
  - wrapper is included for userland; should be easy to port
  - http://svn.freebsd.org/viewvc/base?view=revision&revision=184610
  - http://turbocat.net/~hselasky/usb4bsd/
  - http://gitweb.dragonflybsd.org/~polachok/dragonfly.git/shortlog/refs/heads/usb2
* suspend/resume for SMP x86
  - http://lists.freebsd.org/pipermail/freebsd-acpi/2008-May/004879.html
* AMD64 suspend/resume
  - http://svn.freebsd.org/viewvc/base?view=revision&revision=189903
* text dumps

[alexh@leaf:~/home] $ roundup-server -p 8080 bt=bugtracker
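A rough userland sketch of the RedZone idea from the list above (hypothetical `rz_*` names and a fixed 16-byte canary; the real FreeBSD code in sys/vm/redzone.c also records allocation/free backtraces):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define REDZONE 16              /* bytes of canary on each side (assumed) */
#define CANARY  0xA5

/* Hypothetical name: pad the request, remember the user size, and fill
 * canary bytes before and after the region handed to the caller. */
static void *rz_malloc(size_t n)
{
    unsigned char *p = malloc(n + 2 * REDZONE + sizeof(size_t));
    if (p == NULL)
        return NULL;
    memcpy(p, &n, sizeof(size_t));                  /* stash user size */
    memset(p + sizeof(size_t), CANARY, REDZONE);    /* front canary */
    memset(p + sizeof(size_t) + REDZONE + n, CANARY, REDZONE); /* rear */
    return p + sizeof(size_t) + REDZONE;
}

/* Returns 0 if the canaries are intact, -1 on underflow/overflow. */
static int rz_free(void *up)
{
    unsigned char *u = up;
    unsigned char *base = u - REDZONE - sizeof(size_t);
    size_t n, i;

    memcpy(&n, base, sizeof(size_t));
    for (i = 0; i < REDZONE; i++) {
        if (base[sizeof(size_t) + i] != CANARY || u[n + i] != CANARY) {
            free(base);
            return -1;          /* corruption detected */
        }
    }
    free(base);
    return 0;
}
```

The kernel version additionally has to keep the padded size compatible with the allocator's alignment rules, which is exactly the problem discussed in the log below.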
-05:48- :        dillon@: no, double frees to the object cache are nasty.  It can't detect them.  the object 
                          winds up in the magazine array twice
-05:48- :        dillon@: (and possibly different magazines, too)
-05:49- :         alexh@: can't I just write some magic to a free object on the first objcache_put and check 
                          if it's there on objcache_put?
-05:49- :         alexh@: and clear it on objcache_get, anyways
-05:50- :        dillon@: no, because the object may still have live-initialized fields
-05:50- :        dillon@: because it hasn't been dtor'ed yet (one of the features of the objcache, to avoid 
                          having to reinitialize objects every time)
-05:50- :        dillon@: the mbuf code uses that feature I think, probably other bits too
-05:51- :        dillon@: theoretically we could allocate slightly larger objects and store a magic number at 
                          offset [-1] or something like that, but it gets a little iffy doing that
-05:52- :        dillon@: the objcache with the objcache malloc default could probably do something like that 
                          I guess.
-05:52- :        dillon@: I don't consider memory tracking to be a huge issue w/ dragonfly, though I like the 
                          idea of being able to do it.  It is a much bigger problem in FreeBSD due to the 
                          large number of committers 
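dillon@'s offset-[-1] idea could look roughly like this in userland (hypothetical `obj_get`/`obj_put` names, not the real objcache(9) API; the status word sits outside the object, so ctor-preserved fields stay untouched):

```c
#include <stdint.h>
#include <stdlib.h>

#define MAGIC_ALLOCATED 0xA110C8EDu
#define MAGIC_FREED     0xDEADBEEFu

/* Allocate a slightly larger object and keep a status word just before
 * the pointer handed out, i.e. at "offset [-1]" from the object. */
static void *obj_get(size_t size)
{
    uint32_t *p = malloc(sizeof(uint32_t) + size);
    if (p == NULL)
        return NULL;
    p[0] = MAGIC_ALLOCATED;
    return p + 1;
}

/* Returns -1 on a double put, 0 otherwise. */
static int obj_put(void *obj)
{
    uint32_t *p = (uint32_t *)obj - 1;
    if (p[0] == MAGIC_FREED)
        return -1;              /* object already in a magazine */
    p[0] = MAGIC_FREED;
    /* A real cache would push the object onto a magazine here; the
     * sketch deliberately keeps the memory so the word stays readable. */
    return 0;
}
```

As dillon@ notes, doing this for real gets iffy: the extra word perturbs object size and alignment, which matters for the power-of-2 guarantees discussed further down.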


-05:55- :        dillon@: For the slab allocator you may be able to do something using the Zone header.
-05:55- :        dillon@: the slab allocator in fact I think already has optional code to allocate a tracking 
                          bitmap to detect double-frees
-05:56- :        dillon@: sorry, I just remembered the bit about the power-of-2 allocations
-05:56- :        dillon@: for example, power-of-2-sized allocations are guaranteed not only to be aligned on 
                          that particular size boundary, but also to not cross a PAGE_BOUNDARY (unless the 
                          size is > PAGE_SIZE)
-05:57- :        dillon@: various subsystems such as AHCI depend on that behavior to allocate system 
                          structures for which the chipsets only allow one DMA descriptor.
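The alignment guarantee dillon@ describes can be sanity-checked with plain arithmetic: a power-of-2-sized allocation of at most PAGE_SIZE, placed on a boundary of its own size, can never straddle a page. A small check of that reasoning:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Does the range [addr, addr + size) cross a PAGE_SIZE boundary? */
static int crosses_page(uintptr_t addr, uintptr_t size)
{
    return (addr / PAGE_SIZE) != ((addr + size - 1) / PAGE_SIZE);
}

/* For every power-of-2 size up to PAGE_SIZE and every size-aligned start
 * address within a page, the allocation stays inside one page. */
static int check_guarantee(void)
{
    uintptr_t s, a;

    for (s = 1; s <= PAGE_SIZE; s <<= 1)
        for (a = 0; a < PAGE_SIZE; a += s)
            if (crosses_page(a, s))
                return 0;
    return 1;
}
```

This is why a redzone scheme that turns a power-of-2 request into a non-power-of-2 one silently breaks consumers like AHCI that rely on single-page DMA placement.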
-05:59- :         alexh@: http://svn.freebsd.org/viewvc/base/head/sys/vm/redzone.c?view=markup&pathrev=155086 
                          < this is redzone. it basically calls redzone_size_ntor() to increase the size in 
                          malloc(), and then redzone_setup() just before returning the chunk
-06:02- :        dillon@: jeeze. that looks horrible.
-06:03- :         alexh@: I don't quite get that nsize + redzone_roundup(nsize)
-06:03- :        dillon@: I don't get it either.  It would completely break power-of-2-sized alignments in the 
                          original request
-06:04- :        dillon@: hmmm.  well, no it won't break them, but the results are going to be weird
-06:04- :        dillon@: ick.

-06:15- :        dillon@: if the original request is a power of 2 the redzone adjusted request must be a power 
                          of 2
-06:15- :        dillon@: basically
-06:16- :        dillon@: so original request 64, redzone request must be 128, 256, 512, 1024, etc.
-06:16- :         alexh@: yah, k
-06:16- :        dillon@: original request 32, current redzone code would be 32+128 which is WRONG.
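The rounding rule dillon@ spells out here - if the original request is a power of 2, the redzone-adjusted request must itself be bumped to the next power of 2 - can be sketched as follows (assumed 16-byte per-side padding and hypothetical helper names, not actual kmalloc code):

```c
#include <stddef.h>

#define REDZONE_PAD 16          /* assumed per-side padding for the sketch */

static int is_pow2(size_t n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

static size_t round_up_pow2(size_t n)
{
    size_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Redzone-adjusted request size that preserves the power-of-2 class of
 * the original request, so the allocator's alignment guarantees hold. */
static size_t redzone_size(size_t req)
{
    size_t n = req + 2 * REDZONE_PAD;

    if (is_pow2(req))
        n = round_up_pow2(n);
    return n;
}
```

So a 64-byte request becomes a 128-byte one, rather than an odd-sized allocation that loses its 64-byte alignment.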
-06:16- :         alexh@: how big is PAGE_SIZE ?
-06:16- :        dillon@: 4096 on i386 and amd64
-06:17- :         alexh@: and one single malloc can't be bigger than that?
-06:17- :        dillon@: I'm fairly sure our kmalloc does not guarantee alignment past PAGE_SIZE (that is, 
                          the alignment will be only PAGE_SIZE even if you allocate PAGE_SIZE*2)
-06:17- :        dillon@: a single kmalloc can be larger than PAGE_SIZE
-06:18- :        dillon@: it will use the zone up to around 1/2 the zone size (~64KB I think), after which it 
                          allocates pages directly with the kernel kvm allocator
-06:18- :        dillon@: if you look at the kmalloc code you will see the check for oversized allocations
-06:18- :         alexh@: yah, saw that
-06:18- :         alexh@: "handle large allocations directly"
-06:19- :         alexh@: not sure how to do this, really, as the size is obviously also changed in 
                          kmem_slab_alloc
-06:20- :         alexh@: but kmem_slab_alloc isn't called always, is it?
-06:20- :         alexh@: only if the req doesn't fit into an existing zone
-06:20- :        dillon@: right
-06:20- :        dillon@: you don't want to redzone the zone allocation itself
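The "handle large allocations directly" branch alexh@ found can be sketched as below (numbers assumed from the "~64KB I think" estimate above, hypothetical names; see the actual kmalloc code for the real cutoff):

```c
#include <stddef.h>

#define ZONE_SIZE  (128 * 1024)     /* assumed zone size for the sketch */
#define ZALLOC_MAX (ZONE_SIZE / 2)  /* "~64KB I think" */

/* Requests above roughly half the zone size bypass the slab zones and
 * are handed straight to the kernel kvm allocator. */
static int is_oversized(size_t size)
{
    return size > ZALLOC_MAX;
}
```

Which is why any redzone size adjustment has to happen before this check, and must not be applied again to the zone allocations themselves.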