Planet Primates

April 19, 2014


Whenever I post about security holes here, I regularly get ...

Whenever I post about security holes here, I regularly get visited by a few nutcases who try to explain to me that it's all the fault of C, not of the programmers, or of the task simply being complex. Well then, let's have a look at what happens when you do crypto in Java. Oh. The key leaks because OO programming with exception handling exposes a timing side channel? Who would have thought!

Fact is: OpenSSL is not written in C because C is so awesome. OpenSSL is written in C because the goal is for it to be usable from any programming language. And they can all call C code.

It is also a fact that implementing SSL correctly is a complex problem. And if on top of that you aspire to support every obscure platform and every extension, then the result is simply crap. When you look at everything OpenBSD is currently carving out of OpenSSL with a flaming machete, it makes your head spin. Among other things: support for big-endian x86_64 platforms. Has anyone ever heard of such a thing? No? How about VMS? EBCDIC support? Support for "odd" word lengths? You name it, OpenSSL supports it.

This Java thing, by the way, also appears in the dissertation of one of the people involved. You can read it here (208 pages).

April 19, 2014 02:02 PM

Drug shortages are something you might expect ...

Drug shortages are something you might expect in Africa or perhaps India; that's why they are building up their own generics production there. Less well known is that this is a real problem in Germany as well. The cause (besides specific incidents, such as GlaxoSmithKline having some trouble at its plant in Belgium) is that the market has essentially degenerated into monopolies.
For the Deutsche Akademie für Kinder- und Jugendmedizin (German Academy of Pediatrics and Adolescent Medicine), such supply shortages are the expression of a fundamental problem: the monopolization of the vaccine production market. There are fewer and fewer manufacturers, they say, and product diversity is declining. The consequence: in case of supply shortages or a market withdrawal, there is often no alternative preparation to fall back on, and an ethically questionable prioritization takes place.
How scary is THAT!? Good grief.

Now I'm wondering whether speculators made hoarding purchases, or whether there genuinely aren't enough reserves to meet demand for the next few months? That can't be true!

April 19, 2014 02:02 PM


How to set up CLJS + Emacs with a REPL for client-side dev only

How do I set up Emacs and CLJS?

I'm working on a d3 app with client-side code only (at the moment). I'm trying to set up a REPL in Emacs that connects to the browser. I found cider and austin, but I'm not sure how they connect, if at all. Internet research shows that all projects involving cljs have cljsbuild in their project.clj; I'm not sure how that connects with cider and austin either.

I just want to write code, send it to the REPL for evaluation and get going. I'd rather do that than compile the CLJS each time I save the file and refresh the browser afterwards.

I've been scouring the internet for the past 3 days for tutorials, guides, documentation and blog posts, and trying I-don't-know-how-many solutions didn't yield anything. I just couldn't make this happen on my own.

How do I get this up and running?

submitted by xwildeyes

April 19, 2014 02:00 PM


How to prove correctness of BFS algorithm

How do we prove the correctness of the BFS or DFS algorithms for finding connected components in a graph?

I have come up with a traversal algorithm that is very similar to these, but I need to prove its correctness.

Any reference would be appreciated.

by emab at April 19, 2014 01:59 PM


Solved: How to convert a list of hashmaps into one hashmap in Clojure?

I have a list which looks like this:

({:course 2, :mark 9} {:course 5, :mark 8} {:course 6, :mark 10})

And I want to convert it to a hashmap:

{:2 9 :5 8 :6 10}

The list was created from a MySQL database. I don't know whether I can get that data from the database in some other format that would be easier to convert to one hashmap; I used the java.jdbc query function.

Can anybody help me?

EDIT: I solved this with:

(let [myHashMap (into {} (map #(hash-map (keyword (str (first %))) (first (rest %))) (map rearrangeList listOfMarksFromDatabase))) ])

by user3549602 at April 19, 2014 01:58 PM

How to run specifications sequentially

I want to create few specifications that interoperate with database.

class DocumentSpec extends mutable.Specification with BeforeAfterExample {

  def before() = {createDB()}
  def after() = {dropDB()}

  // examples
  // ...
}

The database is created and dropped before and after every example (the examples are executed sequentially). Everything works as expected as long as there is only one spec that works with the database. Because specifications are executed in parallel, they interfere and fail.

I hope that I'm able to avoid this by instructing specs2 to run tests with side effects sequentially while letting side-effect-free tests run in parallel. Is that possible?

by Jeriho at April 19, 2014 01:56 PM

Overcoming Bias

The Up Side Of Down

In her new book, The Up Side of Down: Why Failing Well Is the Key to Success, Megan McArdle takes some time to discuss forager vs. farmer attitudes toward risk.

Forager food sources tended to be riskier and more variable, while farmer food sources are more reliable. So foragers emphasized food sharing more, and a more tolerant attitude toward failure to find food. In contrast, farmers shared food less and held individuals more responsible for getting their own food. We’ve even seen the same people switch from one attitude to the other as they switched from foraging to farming. Today some people and places tend more toward farmer values of strict personal responsibility, while other people and places tend more toward forager forgiveness.

McArdle’s book is interesting throughout. For example, she talks about how felons on parole are dealt with much better via frequent reliable small punishments, relative to infrequent random big punishments. But when it comes to bankruptcy law, a situation where the law can’t help but wait a long time to respond to an accumulation of small failures, McArdle favors forager forgiveness. She points out that this tends to encourage folks who start new businesses, which encourages more innovation. And this does indeed seem to be a good thing.

Folks who start new businesses are pretty rare, however, and it is less obvious to me that more leniency is good overall. It is not obvious that ordinary people today face more risk than did most farmers during the farming era. The US apparently has the most lenient bankruptcy law in the world, and that is indeed some evidence for its value. However, it seems to me more likely that US forager forgiveness was caused by US wealth. McArdle tells us that the US got lenient bankruptcy in the late 1800s via lobbying by senators representing western farmers in debt to eastern banks. And it is hard to see how farming in the US west was riskier than farming throughout the whole farming era.

Most likely what changed was the wealth of US farmers, and their new uppity attitudes toward rich elites. This fits with debt-forgiveness being a common liberal theme, which fits with liberal attitudes being more forager-like, and becoming more common as rising wealth cut the fear that made farmers. If lenient bankruptcy law is actually better for growth in our world, this seems another example of Caplan’s idea trap, where rising wealth happens to create better attitudes toward good policy.

Overall I found it very hard to disagree with anything that McArdle said in her book. If you know me, that is quite some praise. :)

by Robin Hanson at April 19, 2014 01:50 PM


How is the default probability implied from market implied CDS spreads for CVA/DVA calculation?

From point 38 on p. 17, the default probability can be implied from market-implied CDS spreads. A "Macro Surface" method is mentioned, but I cannot get any clue of what it is. Where do I get the academic reference for that?

Also, what is the commonly used methodology to imply the default probability for CVA/DVA calculation?

The article "Credit and Debit Valuation Adjustment" can be seen in

by Dennis at April 19, 2014 01:49 PM


Proving a language (ir)regular (standard methods have failed)

I'm currently trying to prove a language regular (for personal amusement). The language is:

The language containing all numbers in ternary that have even bit-parity when encoded in binary.

Now, I've tried a few different approaches, none of which have led to success. I've tried using the pumping lemma (couldn't find anything to pump on), Myhill-Nerode (similarly), and even counted the number of strings of each length for which the statement is true (my intuition is that it checks out with a probabilistic argument).

Are there any other approaches that might help here, or are there any intuitions that might be helpful? At this point, my best guess is that the language is not regular, but I don't seem to be able to come up with a proof.

by James at April 19, 2014 01:47 PM


Play test-only doesn't ignore tests

I have a play application and I need to ignore my functional tests when I compile and build, then later run only the integration tests.

This is my test code:


class ApplicationSpec extends Specification {

   "Application" should {
      "send 404 on a bad request" in new WithApplication {
         route(FakeRequest(GET, "/boum")) must beNone
      }
   }
}


class IntegrationSpec extends Specification {

    "Application" should {
       "work from within a browser" in {
           running(TestServer(9000), FIREFOX) { browser =>
               browser.pageSource must contain("Your new application is ready.")
           }
       }
    } section "integration"
}

The docs tell me I can use something like this from the command line:

play "test-only -- exclude integration"

The only problem is that this doesn't actually exclude any tests and my integration tests invoke firefox and start running. What am I doing wrong? How can I exclude the integration tests and then later run them by themselves?

by usmcs at April 19, 2014 01:43 PM

Using Scala class defined in package object from Java

The following Scala example defines a class inside a package object:

package com.mycompany

package object test {
  class MyTest {
    def foo(): Int = {
      42 // placeholder body; truncated in the original post
    }
  }
}
The following three classes are generated:

com/mycompany/test/package.class
com/mycompany/test/package$.class
com/mycompany/test/package$MyTest.class

The problem arises when trying to use the MyTest class from Java. I think that since package$MyTest contains a $ in the name, Java is not acknowledging its existence. Nevertheless, the package$ class is accessible.

Running javap on package$MyTest.class returns:

Compiled from "Test.scala"
public class com.mycompany.test.package$MyTest {
  public int foo();
  public com.mycompany.test.package$MyTest();
}

I've tried accessing the class using Eclipse, Intellij and Netbeans, without success. Is it possible to use Scala classes defined in package objects from Java?
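For what it's worth, a workaround sketch (my assumption, not from the original post): since `package` is a reserved word in Java, the name `com.mycompany.test.package$MyTest` cannot even be written in Java source. Declaring the class at package level instead of inside the package object gives it a plain name Java can see:

```scala
package com.mycompany.test {
  // Declared at package level, this compiles to a plain
  // com.mycompany.test.MyTest class that Java code can reference directly.
  class MyTest {
    def foo(): Int = 42 // hypothetical body; the original post truncates it
  }
}

object Demo extends App {
  assert(new com.mycompany.test.MyTest().foo() == 42)
}
```

If Scala call sites must keep using `test.MyTest`, a type alias inside the package object can preserve the old name.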

by Georgi Khomeriki at April 19, 2014 01:42 PM

Is there some way to avoid this kind of duplicated code in Clojure?

I get some parameters and then call a function which accepts a map of parameters. The keys of the map are the parameters' names, like this:

   (GET "/api/search" [nick_name gender phone max_age min_age page lmt]
        (db-search-users :nick_name nick_name :gender gender :phone phone
                         :max_age max_age :min_age min_age :page page :lmt lmt))

Is there some way to avoid the copy and paste?

by user2219372 at April 19, 2014 01:30 PM

DragonFly BSD Digest

In Other BSDs for 2014/04/19

I’ve got “coverage” of most every BSD this week.

by Justin Sherrill at April 19, 2014 01:30 PM


leftReduce Shapeless HList of generic types

This is essentially what I want:

case class Foo[T](x: T)

object combine extends Poly {
  implicit def caseFoo[A, B] = use((f1: Foo[A], f2: Foo[B]) => Foo((f1.x, f2.x)))
}

def combineHLatest[L <: HList](l: L) = l.reduceLeft(combine)

So combineHLatest(Foo(1) :: Foo("hello") :: HNil) should yield Foo( (1, "hello") )

The above doesn't compile, as the compiler cannot find an implicit LeftReducer, but I'm at a loss as to how to implement one.

by Channing Walton at April 19, 2014 01:15 PM


Calculating exact/approximate solution to a formula

Suppose we have a set of variables $\mathbf{y} = \left(y_1, ..., y_n \right)$. Also consider the set of functions $g_i(y_i), 1 \leq i \leq n$. Note that $g_i(\cdot)$ depends only on $y_i$.

Consider the problem: $$ \sum_{\mathbf{y} \in C} \prod_{i=1}^n g_i(y_i) $$ where $C$ is the feasible space for $\mathbf{y}$. The question is how to compute this sum (or an approximation to it) efficiently.

Note that naively computing this takes $m^n$ evaluations (assuming that each $y_i$ can take $m$ possible values). To make it clearer, consider the following example: $$ \sum_{(y_1, y_2) \in C} g_1(y_1) g_2(y_2) $$

You can make the following assumptions (but not limited to):

  • $\mathcal{C}$ can be represented with a linear constraint: $$ A\mathbf{y} \leq b $$
  • $g_i(y_i)$ is bounded above with some number $M$.
  • $g_i(y_i)$ is a convex/super-convex function
  • Or any other necessary assumption that can help to solve or approximate this.

Update1: It might be useful to know that $$ \ln \sum_{\mathbf{y} \in C} \prod_{i=1}^n g_i(y_i) \leq \sum_{\mathbf{y} \in C} \ln \prod_{i=1}^n g_i(y_i) = \sum_{\mathbf{y} \in C} \sum_{i=1}^n \ln g_i(y_i) $$

by Daniel at April 19, 2014 01:09 PM

Fred Wilson

Video Of The Week: The Gotham Gal on TWIST

Last summer, The Gotham Gal went on Jason Calacanis’ show, This Week In Startups. I had never watched it until this morning. It’s fun to see two people who know each other well (they worked together in the late 90s) do a conversation. It’s an hour long but there is some good stuff in here.

by Fred Wilson at April 19, 2014 01:02 PM


A short speech by the CDU/CSU in the EU Parliament: "I ...

A short speech by the CDU/CSU in the EU Parliament:
"I cannot accept that we are not supposed to get a draft anymore," said CDU MEP Axel Voss, seconding Weber. The threat level had "if anything become even greater" since the core work on the directive in 2005, when many politicians were still under the impression of the attacks on public transport in Madrid and London. Besides the right to liberty, there is also a right to security in the Charter of Fundamental Rights, Voss emphasized. Moreover, without connection and location data, criminal prosecution would "no longer be possible in many thousands of cases". The Christian Democrat therefore raised the alternative question of whether "we should resort to vigilante justice".
Great idea! That's exactly how we'll do it! Best of all, we lock up the CDU parliamentary group, invoking the right to security they themselves put forward and the vigilante justice they themselves proposed as the method.

April 19, 2014 01:01 PM


How to get into the field of AI?


I am a first year EE/CS double major interested in AI and machine learning and would like to get involved in the field.

I would like to know which undergrad courses would prepare me best for the field of AI. My program is very flexible, I can do subjects from EE, CS, maths or economics. So it is up to me to structure my degree. I am thinking about doing the following:

  • Maths: graph theory and linear algebra, multivariable calculus, differential equations, real analysis, complex analysis, discrete maths, and probability and statistics.
  • CS: design and analysis of algorithms, algorithms and data structures, object-oriented programming, database systems, theory of computation, computer systems, intro to artificial intelligence and machine learning, software simulations, and a CS project.
  • EE: all of them.
  • Economics: 4 subjects in economics.

Do you think these courses are a good way to prepare myself for AI? What else should I do?

I am very passionate about AI and have been interested in the field for a long time. I have read the book "AI - A Modern Approach" and it made me love the field even more.

Would very much appreciate your advice!

submitted by member2357

April 19, 2014 12:52 PM


Infer multiple generic types in an abstract class that should be available to the compiler

I am working on an abstract CRUD DAO for my play2/slick2 project. To have convenient type-safe primary IDs I am using Unicorn as an additional abstraction and convenience layer on top of Slick's MappedTo & ColumnBaseType.

Unicorn provides a basic CRUD-DAO class BaseIdRepository which I want to further extend for project specific needs. The signature of the class is

class BaseIdRepository[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
  (tableName: String, val query: TableQuery[T])
  (implicit val mapping: BaseColumnType[I])
  extends BaseIdQueries[I, A, T]

This leads to DAO implementations looking something like

class UserDao extends 
  BaseIdRepository[UserId, User, Users]("USERS", TableQuery[Users])

This seems awfully redundant to me. I was able to supply tableName and query from T, giving me the following signature on my own Abstract DAO

abstract class AbstractIdDao[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]] 
  extends BaseIdRepository[I,A,T](TableQuery[T].baseTableRow.tableName, TableQuery[T])

Is it possible in Scala to somehow infer the types I and A to make a signature like the following possible? (Users is a class extending IdTable)

class UserDao extends AbstractIdDao[Users]

Is this possible without runtime reflection? If it works only via runtime reflection: how do I use the Manifest in a class definition, and how big is the performance impact in a reactive application?

Also, since I am fairly new to the language and work on my own: is this good practice in Scala at all?

Thank you for your help. Feel free to criticize my question and my English. Improvements will of course be submitted to the Unicorn git repo.

EDIT: Actually, TableQuery[T].baseTableRow.tableName, TableQuery[T] does not work due to the error "class type required but T found"; IDEA was superficially fine with it, scalac wasn't.

by Floscher at April 19, 2014 12:41 PM

How to upgrade Scala to a newer version from the command line?

Is there a way to upgrade the installed Scala version via sbt or another command-line tool?

I'm sure there is a way, but I couldn't find any after a quick search, am I missing anything?
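For what it's worth, a sketch of the usual sbt answer (assuming sbt is the build tool in use): there is no global Scala installation for sbt to upgrade; each project pins its own compiler version in build.sbt, and sbt downloads that version on demand the next time the project is loaded:

```scala
// build.sbt (sketch): bump this value and sbt fetches and uses
// the requested Scala compiler for this project automatically.
scalaVersion := "2.11.0"
```

A system-wide `scala` command installed by hand or via a package manager has to be upgraded through that same package manager instead.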

by Eran Medan at April 19, 2014 12:32 PM


What is an appropriate textbook for this mathematics for CS course?

I'm using this course to help learn mathematics for Computer Science. Even the first couple of lectures have made me see maths in a different light. I find it to be totally awesome and interesting now. Problem is, for some subjects, I feel like a textbook would be useful. I can't find one on the site, and I don't know where I could find one, so I'm asking you guys. Thanks!

submitted by a-single-tear

April 19, 2014 12:31 PM


Calculating instantaneous forward rate from zero-coupon yield curve

I have a big dataset containing zero-coupon bond yields with different relative maturities. I fix a time horizon in my dataset and want to calculate the instantaneous forward rate. Here is how I calculated it:

The yield curve is given by the formula $Y(t,T)=-\frac{\log(P(t,T))}{T-t}$.

So by inverting it we get the bond price:

$$P(t,T) = e^{-Y(t,T)\,(T-t)}$$

We get the instantaneous forward rate from the partial derivative of $\log(P(t,T))$ with respect to $T$, so the formula I use is:

$$f(t,T_i) = -\frac{\log P(t,T_i) - \log P(t,T_{i-1})}{T_i - T_{i-1}}$$

where $T_0=0$.

My goal is to set up an observation matrix of instantaneous forward rates for volatility estimation in a model, and I want to be sure that my pre-calculations are fine. Thanks in advance for your help.

by user7778 at April 19, 2014 12:29 PM


Scala: case class default values and equals/hashCode: Point ≠ Point(0,0)

Why are these sets all different?

case class Point(x:Int = 0, y:Int = 0)

Set(Point, Point)               // Set(Point)
Set(Point, Point(0,0))          // Set(Point, Point(0,0))
Set(Point(0,0), Point(x=0,y=0)) // Set(Point(0,0), Point(0,0))

Set equality is false too.

I would think that even with defaults, equals and hashCode would depend on the values, not on the string representation or something.
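A likely explanation, sketched below: in `Set(Point, Point)` the bare name `Point` refers to the auto-generated companion object, not to an instance, so the three sets are comparing different kinds of values. With parentheses the defaults apply and case-class equality behaves as expected:

```scala
case class Point(x: Int = 0, y: Int = 0)

object Demo extends App {
  // Point() fills in the defaults, so it is structurally equal to Point(0, 0):
  assert(Point() == Point(0, 0))
  assert(Set(Point(), Point(0, 0)).size == 1)
  // Bare `Point` is the companion object, not an instance of the case class:
  assert(Point != Point())
}
```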

by sam boosalis at April 19, 2014 12:25 PM

How to instantiate trait which extends class with constructor

I have this code:

class A(name:String)
trait E extends A

new E{}  //compile error

Is such inheritance possible? I tried to create a val or a def in the body of the anonymous class; it doesn't help.
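A sketch of one standard workaround: a trait cannot supply its superclass's constructor arguments, so the anonymous class in `new E {}` has no way to call `A(name)`. Mixing the trait into a concrete instantiation of `A` compiles:

```scala
class A(name: String)
trait E extends A

object Demo extends App {
  // Supply A's constructor argument directly and mix E in on top:
  val e = new A("demo") with E
  assert(e.isInstanceOf[E])
  assert(e.isInstanceOf[A])
}
```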

by ka4eli at April 19, 2014 12:11 PM


The latest highlight of the propaganda battle between ...

The latest highlight of the propaganda battle between Russia and Ukraine:
The man who introduced himself as a "lieutenant colonel of the Russian army" after the occupation of the police station in the eastern Ukrainian city of Gorlowka by self-defense forces in mid-April is a Ukrainian cemetery thief. This was reported by the Ukrainian agency UNIAN on Wednesday, citing the Internet portal Ostrow.

April 19, 2014 12:01 PM


Review for database throttling trait based on play slick plugin

I implemented a throttled database service trait that wraps my service code in a Future, supplies a Slick session and throttles the number of requests according to the length of the thread pool queue.

My main reason for this was to transfer the responsibility for database session / transaction handling away from the controllers into the service layer of the application.

However, since I am fairly new to play and scala in general and I am working alone, I would really like some input regarding my code. Is it a good practice at all? Am I about to face any negative performance implications? Are there ways to optimize / refactor my code?

The code can also be found on Pastebin. Thank you for your input!

import daos.exceptions.TabChordDBThrottleException
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit, ThreadPoolExecutor}
import scala.concurrent._
import play.api.db.slick
import play.core.NamedThreadFactory

/** Implement this trait to get a throttled database session wrapper. */
trait ThrottledDBService {
  /** Override to use a different Application */
  protected def app = play.api.Play.current

  /** Override to use a different database name */
  protected def dataBaseName = slick.Config.defaultName

  protected object DBConfiguration {
    private def buildThreadPoolExecutionContext(minConnections: Int, maxConnections: Int) = {
      val threadPoolExecutor = new ThreadPoolExecutor(minConnections, maxConnections,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue[Runnable](),
        new NamedThreadFactory("tabchord.db.execution.context"))
      ExecutionContext.fromExecutorService(threadPoolExecutor) -> threadPoolExecutor
    }

    val partitionCount = app.configuration.getInt(s"db.$dataBaseName.partitionCount").getOrElse(2)
    val maxConnections = app.configuration.getInt(s"db.$dataBaseName.maxConnectionsPerPartition").getOrElse(5)
    val minConnections = app.configuration.getInt(s"db.$dataBaseName.minConnectionsPerPartition").getOrElse(5)
    val maxQueriesPerRequest = app.configuration.getInt(s"db.$dataBaseName.maxQueriesPerRequest").getOrElse(20)
    val (executionContext, threadPool) = buildThreadPoolExecutionContext(minConnections, maxConnections)
  }

  /** A predicate for checking our ability to service database requests, determined by ensuring that the request
      queue doesn't fill up beyond a certain threshold. For convenience we use the max number of connections * the max
      # of db requests per web request to determine this threshold. It is a rough check, as we don't know how many
      queries we're going to make or what other threads are running in parallel etc. Nevertheless, the check is
      adequate in order to throttle the acceptance of requests to the size of the pool. */
  protected def isDBAvailable: Boolean = {
    val dbc = DBConfiguration
    dbc.threadPool.getQueue.size() < (dbc.maxConnections * dbc.maxQueriesPerRequest)
  }

  /**
   * Wraps the block with a Future in the appropriate database execution context and slick session,
   * throttling the # of requests in accordance with the rules specified in the db config.
   * Terminates the future with a @TabChordDBThrottleException in case of queue overload.
   * @param body user code
   * @return Future of the database computation
   */
  protected def throttled[A](body: slick.Session => A): Future[A] = {
    if (isDBAvailable) {
      Future {
        slick.DB(dataBaseName)(app).withSession { s =>
          body(s)
        }
      }(DBConfiguration.executionContext)
    } else {
      throw new TabChordDBThrottleException("Too many Requests pending in Threadpool")
    }
  }

  /**
   * Wraps the block with a Future in the appropriate database execution context and slick transaction,
   * throttling the # of requests in accordance with the rules specified in the db config.
   * Terminates the future with a @TabChordDBThrottleException in case of queue overload.
   * @param body user code
   * @return Future of the database computation
   */
  protected def throttledTransaction[A](body: slick.Session => A): Future[A] = {
    throttled { s =>
      s.withTransaction {
        body(s)
      }
    }
  }
}
by Floscher at April 19, 2014 11:59 AM


What are some of the active CS blogs which one should follow?

Currently, many top-notch researchers (or their groups) maintain active blogs. These blogs keep us updated on the latest research in their fields of interest. In most cases, it is easier to understand the blog articles than the corresponding papers, since they skip the gory details and also spell out their intuitions (which is generally missing in a paper).

Hence it would be useful to have a list of such blogs. Please categorize your answers and, if possible, mention a line or two highlighting why you like the particular blog.

by Bagaria at April 19, 2014 11:45 AM


Filter elements from two lists

Basically I have two lists, something like:

L1 = [one, two, three, five, eleven, million]
L2 = [five, million]

so I want to filter the elements of the second list L2 out of L1

to get

[one, two, three, eleven]

I have used the foldl function to loop over L1 and then a foreach loop to decide which element to append, comparing against the second list, but I can't seem to get the logic right. I have something like this:

filter_a(L1, L2) ->
    List = foldl(fun(X, A) ->
                 L = lists:foreach(fun(E) ->
                             case E =:= X of
                                 true ->
                                     [];
                                 _ ->
                                     X
                             end
                     end, L2),
                 lists:append([A, [L]])
         end, [], L1),
    List.

How can I do this in an easy way?

by user1000622 at April 19, 2014 11:40 AM

Incanter sample mean and variance not close to distribution mean and variance

I answered a question regarding generating samples with a positive support and known mean and variance using the gamma distribution in NumPy. I thought I'd try the same in Incanter. But unlike the results I got with NumPy, I could not get a sample mean and variance close to the distribution's mean and variance.

(defproject incanter-repl "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url ""
  :license {:name "Eclipse Public License"
            :url ""}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [incanter "1.5.4"]])

(require '[incanter 
           [distributions :refer [gamma-distribution mean variance draw]] 
           [stats :as stats]])

(def dist 
  (let [mean 0.71 
        variance 2.89 
        theta (/ variance mean) 
        k (/ mean theta) ] 
    (gamma-distribution k theta)))

Incanter calculates mean and variance of the distribution

(mean dist) ;=> 0.71
(variance dist) ;=> 2.89

I calculate a sample mean and variance based on draws from that distribution

(def samples (repeatedly 10000 #(draw dist)))

(stats/mean samples) ;=> 0.04595208774029654
(stats/variance samples) ;=> 0.01223348345651905

I expected these stats calculated on the sample to be much closer to the mean and variance of the distribution. What am I missing?


Incanter has a bug in its implementation of the mean and variance methods. Just like me (and despite its documentation and naming), it is treating the second parameter as a scale parameter in mean and variance.

(defrecord Gamma-rec [shape rate] ; using defrecord since cdf was not matching up in unittest without switching ":lower-tail"
  Distribution
  (pdf [d v] (.pdf (Gamma. shape rate (DoubleMersenneTwister.)) v))
  (cdf [d v] (Probability/gamma rate shape v)) ; TODO decide on :lower-tail
  (draw [d] (cern.jet.random.tdouble.Gamma/staticNextDouble shape rate))
  (support [d] [0,inf+])
  (mean [d] (* shape rate))
  (variance [d] (* shape rate rate)))

These should instead be:

  (mean [d] (/ shape rate))
  (variance [d] (/ shape rate rate))

by A. Webb at April 19, 2014 11:29 AM


Kuratowski's graph planarity criterion

There is a short proof of Kuratowski's graph planarity criterion, but I don't completely understand it. So I hope someone can help me with that proof.

Here is the short proof of Kuratowski.

What I am struggling with is the first lemma.

I would appreciate it if someone could help me.

by user2965601 at April 19, 2014 11:22 AM


secure social websockets play framework

How do I secure a websocket call in the play framework using scala securesocial?

def statusfeed() = WebSocket.using[String] { implicit request =>
    if (logged in) {
      // stream the feed
    } else {
      // reject the connection
    }
}

edit 1:

Tried this, but it didn't work; I always get "not logged in":

def statusfeed() = WebSocket.using[String] { implicit request =>
  var in = Iteratee.ignore[String]
  var out = Enumerator.empty[String]
  session.get("userId").map { sessionId =>
      // sessionId is now userId in session
      // check whether he is authorised to view the page or not
      def getLoadAverage = {
          "%1.2f" format ??? // body truncated in the original
      }

      in = Iteratee.ignore[String]
      out = Enumerator.repeatM {
          Promise.timeout(getLoadAverage, 3 seconds)
      }
  }.getOrElse {
      // "not logged in": anything that you want when he is not in session
  }
  (in, out)
}

edit 2: Replacing userId with SecureSocial.USER_KEY didn't work either

by Peter at April 19, 2014 10:53 AM


Matrix equality up to row/column permutations problem name

Sorry for the trivial question; does the following decision problem have an "official" (possibly short) name?

Given two $n \times m$ $\text{0-1}$ (binary) matrices $M_1, M_2$, check whether they are the same up to row and column permutations.

(something like the short names used in complexity theory for decision problems: e.g. 3SAT, GI (Graph Isomorphism), X3C (Exact Cover By Three Set), CLIQUE, ...)

by Vor at April 19, 2014 10:49 AM


Comparing Scala and Java Double.NaN

Why does this comparison evaluate to true?

scala> Double.NaN equals java.lang.Double.NaN
res5: Boolean = true

But this one evaluates to false?

scala> Double.NaN == java.lang.Double.NaN
res6: Boolean = false

(Aside: this interesting Twitter thread prompted me to ask this question.)
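A sketch of what is going on, to the best of my understanding: primitive `==` on `Double` follows IEEE 754, under which NaN compares unequal to everything, itself included, while `equals` boxes the value to `java.lang.Double`, whose `equals` deliberately treats NaN as equal to NaN so that boxed doubles behave sanely in sets and maps:

```scala
object Demo extends App {
  val nan = Double.NaN
  // IEEE 754 comparison: NaN is not equal to anything, not even itself.
  assert(!(nan == nan))
  // Boxed comparison: java.lang.Double#equals says two NaNs are equal.
  assert(nan equals nan)
}
```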

by Kevin Meredith at April 19, 2014 10:45 AM

Datomic with friend authentication not working properly

I'm working on a Clojure web application project for my college, and I am trying to wire up the Datomic database with Friend authentication, but it is kind of buggy. I will explain further.

First, I am doing user registration (user insertion into the Datomic database) like this, and it works:

(defn insert-user [firstname lastname email password sex date]
  (.get (.transact conn
                   [{:db/id #db/id[:db.part/user -1000001]
                     :user/name firstname
                     :user/lastName lastname
                     :user/username email
                     :user/password (creds/hash-bcrypt password)
                     :user/gender sex
                     :user/birthDate date}]))
  (resp/redirect "/"))

The routes handler and friend authenticator look like this (the function main starts the app).

(def page (handler/site
           ;; friend/authenticate wrapper reconstructed from context;
           ;; the middleware lines in the original post were garbled
           (friend/authenticate
            (-> routes
                wrap-keyword-params
                wrap-nested-params
                wrap-params)
            {:allow-anon? true
             :login-uri "/login"
             :default-landing-uri "/login"
             :unauthorized-handler #(-> (html5 [:h2 "You do not have sufficient privileges to access " (:uri %)])
                                        (resp/status 401))
             :credential-fn (partial creds/bcrypt-credential-fn users)
             :workflows [(workflows/interactive-form)]})))

(defn -main []
  (run-jetty page {:port 8080 :join? false}))

And finally, the Datomic query for users, to match with friend's creds/bcrypt-credential-fn function.

(defn upit-korisnici []
  (def temp (d/q '[:find ?u ?p
                   :where [?user :user/username ?u]
                          [?user :user/password ?p]]
                 (d/db conn)))
  (def users (into {} (map (fn [[k v]] [k {:username k :password v}]) temp))))

The thing that is bugging me and leaving me helpless is that when I register (insert a user), the user is inserted into the Datomic database, but when I try to log in, I can't: it says wrong email and password, even though the new user is there. When I restart the whole app and try to log in with the new user's credentials, it goes through and logs in. Does anyone know how to solve this problem?

by Shile at April 19, 2014 10:39 AM

Calculating Minimal Subset With Given Sum

I was doing a problem in Scala and this is the summary of the task statement:

There is a list of integers (of length N, 0 < N < 10^5) and another integer S (0 < S < 10^15). You are required to find the size of the minimal subset of the given list whose sum of elements is greater than or equal to S.

Input is given as below (the first line is the length of the array, the second is the array of integers (0 < A[i] < 10^9), the third is the number of test cases (0 < T < 10^5), and the fourth contains the S for each test case):

4
4 12 8 10
4
4 13 30 100

Here's what I tried: which elements are selected does not matter, so I sorted the given integers largest first. Then I take the first element and check whether it is greater than or equal to S. If not, I take the second element as well, and so on, until the sum becomes greater than or equal to S.

This algorithm works and I got many test cases correct. But I'm getting Time Limit Exceeded for some. If you can point out how I could make this faster or if there's a better way to do this, it would be much appreciated.

My code (Scala):

object Solution {
  def main(args: Array[String]) {
    val n = readInt()
    val arr: Array[Long] = readLine().split(" ").map(_.toLong).sortWith(_ > _)

    val sums: Array[BigInt] = new Array(n)
    sums(0) = arr(0)
    for (i <- 1 until n) sums(i) = sums(i - 1) + arr(i)

    val t = readInt()
    for (i <- 1 to t) {
      val s: BigInt = BigInt(readLong())
      if (sums(n - 1) < s) {
        // no subset can reach s
      } else {
        var i = 0
        while (sums(i) < s) i += 1
        println(i + 1)
      }
    }
  }
}
by Roshnal at April 19, 2014 10:37 AM

scala, slick : where can I find exceptions raised by a given method?

I wonder where I can find the exceptions raised by such a code:

def readFromDB: String = {
    db_sqlite_xml.withSession {
      implicit db: Session =>
        // ...
    }
}

I can't find it in the Slick scaladoc; I searched for the method "first" in the scaladoc's TableQuery class, but without any success.



by lolveley at April 19, 2014 10:23 AM


What are good books for understanding economics?

I am currently studying Managerial Economics by W. Bruce Allen. The book is good. Are there any good books with quizzes that make economics sweeter? Books that relate to real-world facts and figures would be better.

by kinkajou at April 19, 2014 09:27 AM


problems with scallop and updating to scala 2.11

I tried to update to Scala 2.11.0-M5, but I've run into problems. I use scallop, so I needed to build it with Scala 2.11.0-M5 because I could not find a prebuilt jar. The compile of scallop goes fine, but when I try to run "sbt publish-local" I get the errors below while it builds the documentation. To me this looks like it is trying to build some sbt source file. I tried to find newer sources for sbt (or an sbt jar built with Scala 2.11.0-M5), but could not. Can anyone offer any suggestions?

thanks very much!

[info] Generating Scala API documentation for main sources to /Users/jetson/develop/scala/scala-2.11/scallop/target/scala-2.11/api...
[info] Compiling 12 Scala sources to /Users/jetson/develop/scala/scala-2.11/scallop/target/scala-2.11/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.11.0-M5. Compiling...
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_1dff5778/CompilerInterface.scala:246: error: recursive method rootLoader needs result type
            override def rootLoader = if(resident) newPackageLoaderCompat(rootLoader)(compiler.classPath) else super.rootLoader
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_1dff5778/CompilerInterface.scala:246: error: value rootLoader is not a member of
            override def rootLoader = if(resident) newPackageLoaderCompat(rootLoader)(compiler.classPath) else super.rootLoader
two errors found
[info] 'compiler-interface' not yet compiled for Scala 2.11.0-M5. Compiling...
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_4baba5ae/CompilerInterface.scala:246: error: recursive method rootLoader needs result type
            override def rootLoader = if(resident) newPackageLoaderCompat(rootLoader)(compiler.classPath) else super.rootLoader
/var/folders/m9/fn_sw0s970q02nf8cng94j640000gn/T/sbt_4baba5ae/CompilerInterface.scala:246: error: value rootLoader is not a member of
            override def rootLoader = if(resident) newPackageLoaderCompat(rootLoader)(compiler.classPath) else super.rootLoader
two errors found
[error] (compile:doc) Error compiling sbt component 'compiler-interface'
[error] (compile:compile) Error compiling sbt component 'compiler-interface'
[error] Total time: 15 s, completed Oct 21, 2013 11:41:14 AM

by jetson at April 19, 2014 08:24 AM

how many threads do scala's parallel collections use by default?

When I call Array.tabulate(100)(i=>i).par map { _+ 1}, how many threads are being used?
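For reference, parallel collections run their tasks on a fork/join pool whose default parallelism equals the number of available processors (the exact pool type has varied across Scala versions, so treat this as the general rule rather than a guarantee). That number can be read directly:

```scala
object ParDefaults {
  // Parallel collections default their parallelism level to this value,
  // so a .par map will use roughly this many worker threads.
  val defaultParallelism: Int = Runtime.getRuntime.availableProcessors
}
```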


by Walrus the Cat at April 19, 2014 08:21 AM

Convert java.util.IdentityHashMap to scala.immutable.Map

What is the simplest way to convert a java.util.IdentityHashMap[A,B] into a subtype of scala.immutable.Map[A,B]? I need to keep keys separate unless they are eq.

Here's what I've tried so far:

scala> case class Example()
scala> val m = new java.util.IdentityHashMap[Example, String]()
scala> m.put(Example(), "first!")
scala> m.put(Example(), "second!")
scala> m.asScala // got a mutable Scala equivalent OK
res14: scala.collection.mutable.Map[Example,String] = Map(Example() -> first!, Example() -> second!)
scala> m.asScala.toMap // doesn't work, since toMap() removes duplicate keys (testing with ==)
res15: scala.collection.immutable.Map[Example,String] = Map(Example() -> second!)
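One workaround is to wrap each key in a small class whose equals/hashCode fall back to reference identity, and build the immutable map over the wrappers. IdentityKey and IdentityMaps below are hypothetical helpers of mine, not standard library types:

```scala
// Wrapper whose equals/hashCode use reference identity, so keys that are
// == but not eq stay separate inside a scala.collection.immutable.Map.
final class IdentityKey[A <: AnyRef](val value: A) {
  override def equals(other: Any): Boolean = other match {
    case that: IdentityKey[_] => that.value eq value
    case _                    => false
  }
  override def hashCode: Int = System.identityHashCode(value)
}

object IdentityMaps {
  def toImmutable[A <: AnyRef, B](m: java.util.IdentityHashMap[A, B]): Map[IdentityKey[A], B] = {
    var out = Map.empty[IdentityKey[A], B]
    val it = m.entrySet.iterator
    while (it.hasNext) {
      val e = it.next()
      out += (new IdentityKey(e.getKey) -> e.getValue)
    }
    out
  }
}
```

The price is that the result is keyed by wrappers rather than by A itself; that seems unavoidable, since an immutable Map's notion of key equality is fixed by the keys' own equals.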

by tba at April 19, 2014 08:06 AM

After upgrading specs2 to 2.10 matcher 'haveTheSameElementsAs' seems lost

Recently I upgraded my specs2 to 2.3.11 (for Scala 2.10), and apparently after that the compiler can't recognize the matcher must haveTheSameElementsAs. I couldn't find where it was moved, or a similar matcher, in specs2. Any idea where it is, or what I should use instead of that one?

by user3519173 at April 19, 2014 07:50 AM


totally ordered multicast with Lamport timestamp

I'm studying distributed systems and synchronization, and I didn't grasp this solution for totally ordered multicast with Lamport timestamps. I read that it doesn't need acks to deliver a message to the application, but

"It is sufficient to multicast any other type of message, as long as that message has a timestamp larger than the received message. The condition for delivering a message m to the application is that a message has been received from each other process with a larger timestamp. This guarantees that there are no more messages underway with a lower timestamp."

This is a definition from a book. I tried to apply this definition to an example but I guess that something is wrong.


There are 4 processes and they multicast the following messages (second number in parentheses is timestamp) :
P1 multi-casts (m11, 5); (m12, 12); (m13, 14);
P2 multi-casts (m21, 6); (m22, 14);
P3 multi-casts (m31, 5); (m32, 7); (m33, 11);
P4 multi-casts (m41, 8); (m42, 15); (m43, 19).

Supposing that there are no acknowledgments, can I tell which messages can be delivered and which cannot? Based on the definition, my guess is that only m11 and m31 can be delivered to the application, because all the other messages received will have a greater timestamp, but this seems very strange, and I think I didn't understand the delivery condition very well. I have an exam next week and I'd like to understand this mechanism in general.

by Fabrizio at April 19, 2014 07:37 AM


default value for functions in parameters in Scala

I was learning and experimenting with Scala. I wanted to implement a function with generic type, which takes a function as a parameter and provides a default implementation of that function..

Now when I try it without the generic type, it works :

def defaultParamFunc(z: Int, y: Int)(f: (Int, Int) => Int = (v1: Int, v2: Int) => { v1 + v2 }) : Int = {
  val ans = f(z, y)
  println("ans : " + ans)
  ans
}

this doesn't give any error

but when I try the same with generic type,

def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B, v2: B) => { v1 + v2 }) = {
  val ans = f(z, y)
  println("ans : " + ans)
}

I get the error :

[error]  found   : B
[error]  required: String
[error]  def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B,v2: B) => { v1 + v2 }) = {
[error]                                                                               ^

Is the error because the compiler doesn't know whether type B supports +? When I just return v1 or v2 instead of v1 + v2, it works:

def defaultParamFunc[B](z: B, y: B)(f: (B, B) => B = (v1: B, v2: B) => { v1 }) = {
  val ans = f(z, y)
  println("ans : " + ans)
}

If so, how do I specify that the given type must be Numeric? I tried replacing B with B : Numeric, but it still gives the same error.
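For reference, one encoding that compiles is to route the addition through a Numeric context bound; because a default-argument expression cannot see that implicit evidence, the default function is modeled with Option here. A sketch (the Option encoding and the names are mine, not the only possibility):

```scala
object DefaultParam {
  // The context bound supplies a Numeric[B]; its plus method replaces
  // the + the compiler could not find on an unconstrained B.
  def defaultParamFunc[B: Numeric](z: B, y: B)(f: Option[(B, B) => B] = None): B = {
    val num = implicitly[Numeric[B]]
    val g = f.getOrElse((v1: B, v2: B) => num.plus(v1, v2))
    val ans = g(z, y)
    println("ans : " + ans)
    ans
  }
}
```

Callers write DefaultParam.defaultParamFunc(3, 4)() for the default, or pass Some(customFunction) to override it.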

by Aditya Pawade at April 19, 2014 07:18 AM


Equational Logic and First Order Predicate Logic

I am interested in using Equational Theories (ET) together with Equational Logic (EL) as found in algebraic specification languages such as CafeOBJ. I wish to use ET+EL to represent and prove sentences in First Order Predicate Logic (FOPL). The advantage of such an approach is that one can easily map loose theories written in pseudo-FOPL to more concrete theories which may have initial models (using views). Translating FOPL to EL seems to require auxiliary techniques such as Skolemization and sentence splitting. I am concerned that there may be some FOPL sentences which cannot be represented using EL even with these auxiliary techniques. I am aware that in general EL is regarded as a sub-logic of FOPL and any valid EL theorem is a valid FOPL theorem (but not vice versa). Goguen and Malcolm1 and Goguen and Malcolm2 describe FOPL as the background for equational proof scores in OBJ, a predecessor of CafeOBJ/Maude. They also provide general advice on how to use EL to prove FOPL theorems.
I am using an example from COQ: 1.4 Predicate Calculus, which I have written as two CafeOBJ theories or loose specifications. They are my two attempts to represent FOPL in EL. I am not sure if the approaches are valid.

Here is the description of the relation R from COQ.

Hypothesis R_symmetric : $\forall x y:D, R x y \implies R y x$.
Hypothesis R_transitive : $\forall x y z:D, R x y \implies R y z \implies R x z$.
Prove that R is reflexive in any point x which has an R successor.
For any x and y, R x y implies R y x by symmetry, then by transitivity, we have R x x.
Symmetry and transitivity are not enough to prove reflexivity; we must also assume that x is related to something (i.e. some R x ? or R ? x exists).

Consider the 2 equations labelled PROPERTY in the module TRANSITIVE1 and EQUATION in the module TRANSITIVE2.

My questions are these:
Do the PROPERTY equation and its reduction represent and prove reflexivity? The results of the reductions would appear to represent a proof.
Does the reduction of the EQUATION in TRANSITIVE2 prove reflexivity? I use a form of selective application of bi-directional rewriting. This form of rewriting is highly controlled by the user. Space does not permit a full description, but in this case the condition in the EQUATION is executed first and the assumption R x Y is applied during the rewriting process. We could describe this as a manual proof with some machine support.
How do these approaches differ? From my web searches I get implicit and explicit as follows: an implicit specification of a function or relation asserts properties that its value must satisfy. Implicit definitions take the form of a logical predicate over the input and result variables that gives the result's properties. This approach seems distinct from the usual Peano-style equational axioms (e.g. N + 0 = N). The PROPERTY approach seems to fit this description. Explicitly defined functions or relations are those where the definition can be used to calculate an output from the arguments. The EQUATION approach seems to fit this description. Are these reasonable distinctions?

Regards, Pat
**> This is a loose module describing all models where the equation labelled PROPERTY holds.
mod* TRANSITIVE1 {
  **> One sort or type called D.
  [ D ]
  **> A constant which ensures that transitivity holds
  op Y : -> D
  **> The symmetric property is asserted by CafeOBJ's commutativity property.
  op R__ : D D -> Bool {comm}
  op P : D D D -> Bool
  vars x y z : D
  **> Right associativity of implies
  eq [PROPERTY] : P(x,z,y) = (R x y) implies (R y z) implies (R x z) .
}
**> Normal rewriting
open TRANSITIVE1 .
red P(x,Y,x) . -- Gives true
close

**> A loose module describing all models where the equation labelled EQUATION holds.
mod* TRANSITIVE2 {
  [ D ]
  op R__ : D D -> Bool {comm}
  op P : D D D -> Bool
  vars x y z : D
  **> Replace COQ implies triplet with CafeOBJ conditional equation, using the following:
  **> [A -> B -> C] = [(A & B) -> C] = [C = TRUE if (A & B)]
  ceq [EQUATION] : R x z = true if ((R x y) and (R y z)) .
  **> Normal rewriting cannot deal with extra variable y in the condition on the RHS.
  **> Hence user controlled rewriting required using start/apply commands.
}
open TRANSITIVE2 .
**> Using start/apply
op X : -> D .
**> If any variable x is related to arbitrary constant X
eq [e1] : R x X = true .
**> Then x is related to itself.
**> CafeOBJ's start/apply commands allow selective bi-directional rewriting
start R x x .
apply .EQUATION with y = X at term .
apply reduce at term .
-- Result true : Bool
close

by Pat at April 19, 2014 07:13 AM


What is the best piece of recursion code you have come across?

Cases where really complex logic was simplified because of recursion, and cases where recursion is used in everyday programming.

by jam at April 19, 2014 07:07 AM


Deterministic Multi-tape Turing Machine construction

I'm trying to construct a deterministic multi-tape turing machine for the following language in order to show that $L$ is in $DTIME(n)$:

$$L = \{ www \mid w \in \{a,b\}^+ \}$$

I'm not sure how to get started. Any hints would be appreciated.

by GeorgeCostanza at April 19, 2014 06:59 AM


clojure: with-redefs doesn't work with clojure.core functions?

I've a question about with-redefs. The following example doesn't work as expected. In findmax, clojure.core/max is always called instead of the anonymous function in the with-redefs statement.

(defn findmax [x y]
  (max x y))

(with-redefs [clojure.core/max (fn [x y] (- x y))]
  (findmax 2 5))

When I make the following changes everything works as expected:

(defn mymax [x y]
  (max x y))

(defn findmax [x y]
  (mymax x y))

(with-redefs [my/max (fn [x y] (- x y))]
  (findmax 2 5))

What am I doing wrong here?

by user3535953 at April 19, 2014 06:57 AM

Scala compilation error where it is not detecting the change in the index.html to refer to a new model Quote

I updated the Scala index view file according to this tutorial and got a compile error.

My code is as follows :


package controllers

import play.api._
import play.api.mvc._
import models.Quote

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready.",
      Quote("Citer les pensees des autres, c'est regretter de ne pas les avoir trouvees soi-meme.",
            "Sacha Guitry")))
  }
}



package models
case class Quote(text: String, author: String)


@(message: String, quote: models.Quote)

@main("Welcome to Play 2.1") {

    <p>@quote.text<em> -</em></p>

}


Play is running in auto reloading mode in the background using the following command

~ run

I cannot understand why I am getting this compile error. I even tried doing an Eclipse "Build All".

by MindBrain at April 19, 2014 06:52 AM

Using Thread.sleep() inside an foreach in scala

I have a list of URLs inside a List.

I want to get the data by calling WS.url(currurl).get(). However, I want to add a delay between each request. Can I add Thread.sleep(), or is there another way of doing this?

 one.foreach { currurl =>
   println("using " + currurl)
   val p = WS.url(currurl).get()
   p.onComplete {
     case Success(s) => {
       // do something
     }
     case Failure(f) => {
       // handle the failure
     }
   }
 }

by Soumya Simanta at April 19, 2014 06:24 AM


Proof that union of a regular and a not regular language is not regular

Let $L_1$ be regular, $L_1 \cap L_2$ regular, $L_2$ not regular. Show that $L_1 \cup L_2$ is not regular or give a counterexample.

I tried this: look at $L_1 \backslash (L_2 \cap L_1)$. This one is regular, and I can construct a finite automaton for it ($L_1$ is regular and $L_2 \cap L_1$ is regular, so remove all the paths (a finite amount) for $L_1 \cap L_2$ from the finite amount of paths for $L_1$; a finite amount of paths is left). This language is disjoint from $L_2$, but how can I prove that the union of $L_1 \backslash (L_1 \cap L_2)$ (regular) and $L_2$ (not regular) is not regular?

by Kevin at April 19, 2014 06:24 AM


compojure in production

Hello there,

I have been building REST APIs in compojure for a while now and have really enjoyed it. I have a project about to go into production, and I just wanted to have a brief discussion about production best practices. For other projects, I have ssh'd into the server and just run lein ring server-headless in a tmux session and left it at that. I'm sure there is a better way than this to run compojure in production.

Any thoughts?

submitted by tgallant
[link] [6 comments]

April 19, 2014 05:13 AM


Large array indexes scala

(Language: scala)

I have a problem where I want to iterate over 1 million numbers, but for some reason I get an ArrayIndexOutOfBoundsException. The function I am using works perfectly for 100,000 numbers, but I get the exception if I add a zero.

The array size should not be the problem, because I have built a sort of flex-array, where the array has about 1000 elements and each element holds a list of elements.

So the problem looks something like this:

for (x <- 1 to 1000000) {
  // Do a thing
}
Can for loops only handle a certain number of elements?

I have tried running the program with the "extra-space-flag"

I include the whole code below for reference, in case it makes a difference

object Problem14 {

  class FlexArray (n : Int) {
    var array = new Array[List[Tuple2[Int, Int]]](n)
    val size = n

    for (x <- 0 until size) {
      array(x) = List()
    }

    def insert (n : Int, value : Int) {
      if (find(n) != -1) {
        val i = n % size
        array(i) = (n, value) :: array(i)
      }
    }

    def read (i : Int) : List[Tuple2[Int, Int]] = {
      array(i % size)
    }

    def findAux (list : List[Tuple2[Int, Int]], n : Int) : Int = {
      if (list == Nil) {
        -1
      } else {
        val (num, value) = list.head
        if (n == num) {
          value
        } else {
          findAux(list.tail, n)
        }
      }
    }

    def find (n : Int) : Int = {
      val i = n % size
      findAux(array(i), n)
    }
  }

  var accArray = new FlexArray(10000)

  // this function should be called with 1 as the second argument
  def chainLength (n : Int, acc : Int) : Int = {
    if (n == 1)
      acc
    else {
      val value = accArray.find(n)
      if (value != -1)
        acc + value
      else if (n % 2 == 0)
        chainLength(n/2, acc+1)
      else
        chainLength(3*n+1, acc+1)
    }
  }

  def main(args: Array[String]) {
    var max = 0
    var maxnum = 0

    for (x <- 1 to 1000000) {
      var value = chainLength(x, 1)
      accArray.insert(x, value)
      if (max < value) {
        max = value
        maxnum = x
      }
    }

    println(maxnum + ": " + max)
  }
}
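A likely culprit (an assumption; the post doesn't confirm it) is Int overflow rather than the loop itself: some Collatz chains started below 1,000,000 pass through values larger than Int.MaxValue, so 3*n+1 wraps to a negative number and n % size then yields a negative array index. Doing the chain arithmetic in Long sidesteps this; a minimal sketch:

```scala
object CollatzLength {
  // Chain length computed with Long arithmetic so that 3*n + 1 cannot
  // overflow (chains from seeds below 1e6 can exceed Int.MaxValue).
  def chainLength(start: Long): Int = {
    var n = start
    var len = 1
    while (n != 1) {
      n = if (n % 2 == 0) n / 2 else 3 * n + 1
      len += 1
    }
    len
  }
}
```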



by user3550338 at April 19, 2014 04:28 AM


"Equivalent" data sets despite different numbers

Are the historical data sets of short-term Treasury bill rates considered the same as the historical data sets of savings account interest rates, because by definition they are both risk-free rates of return?

by Aspiring Quant at April 19, 2014 04:20 AM

Planet Theory

This week in history


The great earthquake of 1906 struck San Francisco on April 18, around 5 in the morning. While the earthquake already caused a lot of damage, it was the subsequent fire that ravaged the city: the earthquake had broken the water pipes, and so it was impossible to fight the fire because the hydrants were not working. Except for the hydrant at Church and 20th, which saved my house and a good part of the mission. The hydrant is painted golden, and once a year, on the anniversary of the earthquake, the fire department repaints it and leaves a token of appreciation. (They actually do it at 5 in the morning.)

By the way, there are two faults that can cause earthquakes in the San Francisco Bay Area. One is (our stretch of) the San Andreas fault, which runs close to the ocean, and which caused the 1906 quake and the 1989 one, and which may not be an imminent risk given the energy released in 1989. The other is the Hayward fault, which runs near Berkeley. The Hayward fault had big earthquakes in 1315, 1470, 1630, 1725, and 1868, that is about every 100-140 years, with the last one being 146 years ago…


25 years ago on April 15, Hu Yaobang died. The day before his funeral, about 100,000 people marched to Tiananmen square, an event that led to the occupation of the square, and which culminated in what in mainland China used to be referred to as the “June 4 events,” and now as the “I don’t know what you are talking about” events.

Also, something happened, according to tradition, 1981 years ago.

by luca at April 19, 2014 04:10 AM


How to use installed highlight-symbol instead of the built-in one?

Before 24.4, I used this package: and used "highlight-symbol-at-point" a lot.

In 24.4, there is a built-in highlight-symbol-at-point from "hi-lock.el.gz". But the built-in one keeps highlights even if the symbol is already highlighted, so I preferred the installed package.

Question is: how can I use the installed one instead of the built-in one?

FYI, following is my config:

(require 'highlight-symbol)
(global-set-key [f2] 'highlight-symbol-next)
(global-set-key [(shift f2)] 'highlight-symbol-prev)
(global-set-key [?\s-.] 'highlight-symbol-at-point)
(global-set-key (kbd "H-,") 'highlight-symbol-query-replace)

submitted by goofansu
[link] [comment]

April 19, 2014 03:56 AM


what's the elegant way to write this code in clojure?

I use Clojure and the Korma library.

(defn db-search-users
  [& {:keys [nick_name max_age min_age page page_size lmt oft]
      :or {lmt 10 page_size 10 oft 0}
      :as conditions}]
  (let [users-sql (-> (select* users)
                      (fields :user_name :id :nick_name)
                      (limit (if (nil? page) lmt page_size))
                      (offset (if (nil? page) oft (* page page_size))))]
    (exec (-> users-sql
              ;; need_do_something_here
              ))))

Now I need to add some search conditions to users-sql at "need_do_something_here". I can describe it in imperative style:

if ( nick_name != nil)
    users-sql = (where users-sql (like :nick_name nick_name)

if (max_age != nil)
    users-sql = (where users-sql (> :birthday blabla....))

if (min_age != nil)
    users-sql = (where users-sql (< :birthday blabla....))

How can I do this elegantly, in a functional style?

Another question: I think code like

(if (nil? page) lmt page)

is ugly. Is there a function in Clojure like (get_default_value_3_if_a_is_null a 3)?

by user2219372 at April 19, 2014 03:31 AM


Multiple day forecasting volatility using GARCH(1,1)

I've been struggling with volatility forecasting for a while. After digging around the internet, I've come up with a quasi-solution. However, the result doesn't make sense to me. I want to forecast volatility multiple days into the future. The sigma I get increases over time for n.ahead=50. I want to see the volatility 50 days in the future, but it can't be always increasing.

How should I do this correctly? Any tips will be appreciated. Thank you in advance.


data <- getSymbols("SPY", from="2000-01-01", to="2013-12-31")

model <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)), mean.model = list(armaOrder = c(0, 0), include.mean = FALSE), distribution.model = "norm")

# fit step (apparently omitted from the post):
modelfit <- ugarchfit(spec = model, data = mydata[1:3521, , drop = FALSE])

data = mydata[1:3521, , drop = FALSE]
spec = getspec(modelfit)
setfixed(spec) <- as.list(coef(modelfit))
forecast = ugarchforecast(spec, n.ahead = 50, n.roll = 3520, data = mydata[1:3521, , drop = FALSE], out.sample = 3520)


by lulumink at April 19, 2014 02:57 AM


In cache addressing, what value is placed in the offset field?

There is a 64 KB cache with one-word blocks, and a word is 32 bits. From that I can derive that the length of the tag field is 16 bits and the length of the index field is 14 bits, and, as my professor taught me, the remaining 2 bits are the byte offset.

Why the offset field is 2 bits, other than that it fills the remaining 2 bits of the address, and what its contents are, was never covered in the course.

But when I looked around on Google, I read (correct me if I am wrong) that the length of the offset field can vary. Although I have found answers on how to determine the length, I could not find anything about determining its contents when a read hit/miss is performed. My professor merely said "the byte offset is not used to select the word in the cache".

Just looking for clarification.
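The field widths above follow from simple arithmetic; a sketch (assuming 32-bit byte addresses and 4-byte words, as in the question):

```scala
object CacheFields {
  // floor(log2(x)) for a power of two x.
  def log2(x: Int): Int = 31 - Integer.numberOfLeadingZeros(x)

  val addressBits = 32
  val cacheBytes  = 64 * 1024
  val wordBytes   = 4

  val offsetBits = log2(wordBytes)                      // 2: selects the byte within a word
  val indexBits  = log2(cacheBytes / wordBytes)         // 14: selects one of 16K one-word lines
  val tagBits    = addressBits - indexBits - offsetBits // 16: the rest of the address
}
```

On a word-sized read the offset bits carry no information the cache needs (the whole word is fetched), which matches the professor's remark; they matter only for byte or halfword accesses within the fetched word.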

by sam at April 19, 2014 02:50 AM



Extract all tokens from string using regex in Scala

I have a string like "httpx://__URL__/__STUFF__?param=value". This sample is a URL, but it could be anything with zero or more __X__ tokens in it.

I want to use a regex to extract a list of all the tokens, so the output here would be List("__URL__", "__STUFF__"). Remember, I don't know beforehand how many tokens (if any) may be in the input string.

I've been struggling but have been unable to come up with a regex that will do the trick.

Something like this did not work:
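For reference, a pattern along these lines handles the zero-or-more case via findAllIn (the exact token alphabet is an assumption on my part):

```scala
object TokenExtract {
  // Token = two underscores, a run of letters/digits, two underscores.
  private val token = "__[A-Za-z0-9]+__".r

  // All token occurrences, left to right; empty list if there are none.
  def tokens(s: String): List[String] = token.findAllIn(s).toList
}
```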


by Greg at April 19, 2014 02:01 AM

Less Wrong

New LW Meetup: Christchurch NZ

Submitted by FrankAdamek • 1 votes • 0 comments

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.

Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!

If you missed the deadline and wish to have your meetup featured, you can reach me on gmail at frank dot c dot adamek.

If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing lists in order to be notified when an irregular meetup is happening: Atlanta, Chicago, Cincinnati, Cleveland, Frankfurt, Helsinki, Marin CA, Ottawa, Pittsburgh, Portland, Southern California (Los Angeles/Orange County area), St. Louis, Vancouver.

Whether or not there's currently a meetup in your area, you can sign up to be notified automatically of any future meetups. And if you're not interested in notifications you can still enter your approximate location, which will let meetup-starting heroes know that there's an interested LW population in their city!

If your meetup has a mailing list that you'd like mentioned here, or has become regular and isn't listed as such, let me know!

Want to help out the common good? If one of the meetups listed as regular has become inactive, let me know so we can present more accurate information to newcomers.


April 19, 2014 01:57 AM


Computer Architecture: control pins, CE OE

Just trying to understand some notation. My RAM (6116) and ROM (27C64) have active-low CE and OE pins. These, I believe, are control pins.

I'm assuming that to use the RAM, for example, chip enable (CE) has to be low; then output enable (OE) has to be low for data to be sent/read to the CPU?

If I'm right so far: looking at the 6116 RAM, the data bus pins can be either input or output, so CE and OE don't determine whether data is being read from or written to that address location. There's another pin called WE, which I'm assuming is a control pin that selects whether data is input or output.

What does WE stand for, and am I right with what I've assumed?

by Bobski at April 19, 2014 01:21 AM


Upper bound on Euler characteristic of downward closed family

(Definition: $\mathcal{F}$ is called downward closed if for any $A \in \mathcal{F}$ and $B \subseteq A$ it holds that $B \in \mathcal{F}.$)

Let $\mathcal{F}$ be a downward closed family of subsets of $\{1, ..., n\}$ generated by $m$ sets. Let $\chi(\mathcal{F})$ := number of odd cardinality members of $\mathcal{F}$ minus the number of even cardinality members of $\mathcal{F}.$ Prove or disprove: $|\chi(\mathcal{F})| \leq m^{O(\log n)}.$

MathOverflow Link:

As a side question, I am also curious about how easy/hard it is to compute/approximate $|\chi(\mathcal{F})|$ given $n$ and the $m$ generators of $\mathcal{F}.$

by Raghav Kulkarni at April 19, 2014 01:18 AM


What are your top 3 challenges working with clients on large system projects?

I hope you don't mind a visit from an outsider. I heard this forum was a good way to reach computer scientists and developers.

We often work with developers or contract out development work for large system projects (CRM, ERP, WMS, etc.). I would like to improve our interactions with developers, and make things a little bit easier for the ones we work with.

Would you mind sharing:

  • your top 3 challenges working with your clients, along with a brief description
  • how you have solved them (if applicable)

I know it might not be the same for everyone, but any help you can provide would be sincerely appreciated!

submitted by dma38
[link] [1 comment]

April 19, 2014 01:01 AM

Portland Pattern Repository


Scalatra could not find or load main class

I have hello world scalatra application. I added scalatra-sbt plugin and:

val myDistSettings = DistPlugin.distSettings ++ Seq(
    mainClass in Dist := Some("WebServerLauncher"),
    memSetting in Dist := "2g",
    permGenSetting in Dist := "256m",
    envExports in Dist := Seq("LC_CTYPE=en_US.UTF-8", "LC_ALL=en_US.utf-8"),
    javaOptions in Dist ++= Seq("-Xss4m")
)

After making sbt dist it generates .zip with:

#!/bin/env bash

export CLASSPATH="lib:lib/logback-core-1.0.6.jar:lib/jetty-webapp-8.1.8.v20121106.jar:lib/jetty-io-8.1.8.v20121106.jar:lib/scalatra-scalate_2.10-2.2.2.jar:lib/jetty-server-8.1.8.v20121106.jar:lib/mime-util-2.1.3.jar:lib/scalatra-common_2.10-2.2.2.jar:lib/scalate-core_2.10-1.6.1.jar:lib/jetty-util-8.1.8.v20121106.jar:lib/jetty-servlet-8.1.8.v20121106.jar:lib/joda-convert-1.2.jar:lib/juniversalchardet-1.0.3.jar:lib/slf4j-api-1.7.5.jar:lib/scala-library-2.10.4.jar:lib/jetty-continuation-8.1.8.v20121106.jar:lib/grizzled-slf4j_2.10-1.0.1.jar:lib/config-1.0.0.jar:lib/javax.servlet-3.0.0.v201112011016.jar:lib/jetty-xml-8.1.8.v20121106.jar:lib/rl_2.10-0.4.4.jar:lib/jetty-security-8.1.8.v20121106.jar:lib/akka-actor_2.10-2.1.2.jar:lib/jetty-http-8.1.8.v20121106.jar:lib/scala-reflect-2.10.0.jar:lib/scalate-util_2.10-1.6.1.jar:lib/logback-classic-1.0.6.jar:lib/scalatra_2.10-2.2.2.jar:lib/joda-time-2.2.jar:lib/scala-compiler-2.10.0.jar:"
export JAVA_OPTS="-Xms2g -Xmx2g -XX:PermSize=256m -XX:MaxPermSize=256m -Xss4m -Dfile.encoding=UTF-8 -Dorg.scalatra.environment=production"
export LC_CTYPE=en_US.UTF-8
export LC_ALL=en_US.utf-8

java $JAVA_OPTS -cp $CLASSPATH WebServerLauncher

When I try to run it I get:

Error: Could not find or load main class WebServerLauncher

There is a WebServerLauncher.class in the lib directory.

How to correctly launch it?

Thank you.

by 0xAX at April 19, 2014 12:52 AM

"dist" command gets "not a valid command" error

I have a working Play Framework 2.1 application generated with Typesafe Activator that I've developed in Scala. I'm trying to deploy it to CloudBees using the instructions that can be found here, following the method described under "Using Cloudbees SDK."

However, when I load up the play console and try to run the "dist" command, I get the error "Not a valid command: dist."

I've tried to run this three different ways:

  1. In the terminal window (I'm using Mac OS X), I navigated to the project directory, ran the "activator" application (there is no application in that directory called "play", but "activator" seems to be the equivalent), then from the prompt that appears I entered the command "dist."
  2. I downloaded the regular (non-activator) Play Framework distribution file, added the directory to my path using "export PATH=$PATH:/Applications/play-2.2.2", navigated to the project directory, and ran the command "play dist."
  3. Installed play using Homebrew. Navigated to the project directory and ran "play dist".

All three methods give me the same error (see below). Is the method different for my version of play? Am I missing something from the sbt file? How can I get this working?

Full output for "play dist":

Macmini-##########-#:nimrandslibrary.searchfu.esl kpyancey$ play dist
[info] Loading project definition from /Users/kpyancey/Projects/NimrandsLibrary.SearchFu.Esl/project
[info] Set current project to NimrandsLibrary.SearchFu.Esl (in build file:/Users/kpyancey/Projects/NimrandsLibrary.SearchFu.Esl/)
[error] Not a valid command: dist (similar: set, iflast, last)
[error] Not a valid project ID: dist
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: dist (similar: test, ivy-sbt, history)
[error] dist
[error]     ^

by Nimrand at April 19, 2014 12:46 AM


Expected maximum bin load, for balls in bins with equal number of balls and bins

Suppose we have $n$ balls and $n$ bins. We put the balls into the bins randomly. If we count the maximum number of balls in any bin, the expected value of this is $\Theta(\ln n/\ln\ln n)$. How can we derive this fact? Are Chernoff bounds helpful?
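A standard derivation uses a union bound over bins with a binomial/Chernoff tail for the upper bound, and a second-moment argument for the lower bound. A small Python simulation makes the slow growth of the maximum load visible:

```python
import random

def max_load(n, seed=0):
    """Throw n balls into n bins uniformly at random; return the max bin load."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        bins[rng.randrange(n)] += 1
    return max(bins)
```

For n around 10^3 the maximum load is typically only around 5 or 6, consistent with the very slowly growing bound Θ(ln n / ln ln n).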

by user3367692 at April 19, 2014 12:38 AM

Planet FreeBSD

Weekly Feature Digest 26 — The Lumina Project and preload

This week the PC-BSD team has ported over preload, which is an adaptive readahead daemon. It monitors applications that users run, and by analyzing this data, predicts what applications users might run, and fetches those applications and their dependencies to speed up program load times. You can look for preload in the next few days in edge packages and grab it for testing on your own system.

There is an early alpha version of the Lumina desktop environment that has been committed to ports / packages. Lumina is a lightweight, stable, fast-running desktop environment that has been developed by Ken Moore specifically for PC-BSD. Currently it builds and runs, but lacks many other features as it is still in very early development. Grab it from the edge packageset and let us know what you think, and how we can also improve it to better suit you as a user!

Other updates this week:

* Fixed some bugs in ZFS replication causing snapshot operations to take far longer than necessary
* Fixed an issue with dconf creating files with incorrect permissions, causing browsers to fail
* Added Lumina desktop ports / packages to our build system
* PC-BSD Hindi translation 100% complete
* Improvements to the update center app
* Update PCDM so that it will use “pw” to create a user’s home directory if it is missing but the login credentials were valid. This should solve one of the last reported issues with PCDM and Active Directory users.
* Bugfix for pc-mounttray so that it properly ignores the active FreeBSD swap partition as well.
* Another small batch of 10.x PBI updates/approvals.

by Josh Smith at April 19, 2014 12:22 AM


China has published an embarrassing statistic: ...

China has published an embarrassing statistic: 16.1% of its soil is polluted, and as much as 19.4% of its farmland. Of course not all of it is equally badly polluted, but given the size of China, that is a pretty shattering statistic.

April 19, 2014 12:01 AM

HN Daily

April 18, 2014



What are the benefits / drawbacks of functional object creation in JavaScript?

I just watched Douglas Crockford talk about how prototypical inheritance is "not a good idea either".

YouTube 35m55s

I don't really care about his views on prototypical inheritance in conjunction with JavaScript, since it is such an essential part of the language that it will always be there.

But I would like to know what benefits I am reaping by using the functional object creation that he is showing in the link:

// Class Free Object Oriented Programming
function constructor(init) {
    var that = other_constructor(init),
        method = function () {
            // init, member, method
        };
    that.method = method;
    return that;
}

After the video I re-read the part about functional object creation in his book "JavaScript: The Good Parts", Chapter 5: Inheritance.

But I can't really see the big difference. I can get private members just fine with the constructor pattern:

function Constructor(value) {
    var private = value;
    this.getPrivate = function () {
        return private;
    };
}

var OBJ1 = new Constructor(5);
var OBJ2 = new Constructor('bacon');

console.log( OBJ1.getPrivate() ); // 5
console.log( OBJ2.getPrivate() ); // bacon

The only difference I can spot between the constructor pattern and the functional pattern is the omission of the new keyword. By avoiding the new keyword, we avoid the error of forgetting it.

Writing this:

var panda = createBear();

Instead of this:

var panda = new Bear();

Makes me think it is mainly down to personal preference. I can see how avoiding the new keyword can be useful, and I might adopt the functional pattern. But this is the only reason I can see as to why you would do it. Can I please get some more information on why one would be better or worse than the other?

by Sauer_Kraut at April 18, 2014 11:54 PM

What is zip (functional programming?)

I recently saw some Clojure or Scala (sorry, I'm not familiar with them) and it did zip on a list or something like that. What is zip, and where did it come from?
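zip is a standard function in functional languages (Haskell's Prelude has it; Clojure gets the same effect with (map vector xs ys) and Scala with xs.zip(ys)): it pairs up the elements of two sequences positionally, like closing a zipper. Python's built-in zip illustrates the idea:

```python
xs = [1, 2, 3]
ys = ['a', 'b', 'c']

# zip interleaves two sequences into pairs
pairs = list(zip(xs, ys))
assert pairs == [(1, 'a'), (2, 'b'), (3, 'c')]

# "unzip" is just zip applied to the unpacked pairs
nums, letters = zip(*pairs)
assert nums == (1, 2, 3) and letters == ('a', 'b', 'c')
```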

by Robert Gould at April 18, 2014 11:40 PM



How to use java project in eclipse from clojure project

I have an existing Java code base. It is organized into several projects in eclipse. These projects tend to require one another. For example:

 Project A -> Common Lib 1 -> 2nd level dependency 1
           -> Common Lib 2

To utilize code from other projects I can go to "Build Path" "Projects" tab and click "Add"

Is there something similar that can be done for clojure code (in eclipse), so that I can easily start using code from my existing Java projects in clojure?

by Carlos Rendon at April 18, 2014 10:55 PM

Fluentd to Logstash output plugin

I am trying to read from the scribe server using fluentd and output those logs to be stored in Logstash for now. I know it's very stupid to log the scribe_central logs to another central logger, but we need this to be done in our current architecture.

Does anyone know if there is any plugin to do that? I searched Google but could not find any.

by user3195649 at April 18, 2014 10:40 PM



Can someone ELI5 the closest pair - sieve method?

Can someone explain to me how this works?

These are my teacher's notes on the topic, but I just can't seem to make sense of it:

So instead, I searched for the paper mentioned on the second slide and found:

But I'm still having a hard time wrapping my head around what my teacher's talking about.

submitted by fizzix_
[link] [comment]

April 18, 2014 10:07 PM


Portland Pattern Repository

Planet Theory

TR14-057 | Measure of Non-pseudorandomness and Deterministic Extraction of Pseudorandomness | Diptarka Chakraborty, Manindra Agrawal, Debarati Das, Satyadev Nandakumar

In this paper, we propose a quantification of distributions on a set of strings, in terms of how close to pseudorandom the distribution is. The quantification is an adaptation of the theory of dimension of sets of infinite sequences first introduced by Lutz \cite{Lutz:DISS}. We show that this definition is robust, by considering an alternate, equivalent quantification. It is known that pseudorandomness can be characterized in terms of predictors \cite{Yao82a}. Adapting Hitchcock \cite{Hitchcock:FDLLU}, we show that the log-loss function incurred by a predictor on a distribution is quantitatively equivalent to the notion of dimension we define. We show that every distribution on a set of strings of length $n$ has a dimension $s\in[0,1]$, and for every $s \in [0,1]$ there is a distribution with dimension $s$. We study some natural properties of our notion of dimension. Further, we propose an application of our quantification to the following problem. If we know that the dimension of a distribution on the set of $n$-length strings is $s \in [0,1]$, can we deterministically extract out $sn$ \emph{pseudorandom} bits out of the distribution? We show that this is possible in a special case - a notion analogous to the bit-fixing sources introduced by Chor \emph{et. al.} \cite{CGHFRS85}, which we term a \emph{nonpseudorandom bit-fixing source}. We adapt the techniques of Kamp and Zuckerman \cite{KZ03} and Gabizon, Raz and Shaltiel \cite{GRS05} to establish that in the case of a non-pseudorandom bit-fixing source, we can deterministically extract the pseudorandom part of the source. Further, we show that the existence of optimal nonpseudorandom generator is enough to show ${\P}={\BPP}$.

April 18, 2014 09:49 PM

TR14-056 | Factors of Sparse Polynomials are Sparse | Rafael Mendes de Oliveira, Zeev Dvir

We show that if $f(x_1,\ldots,x_n)$ is a polynomial with $s$ monomials and $g(x_1,\ldots,x_n)$ divides $f$ then $g$ has at most $\max(s^{O(\log s \log\log s)},d^{O(\log d)})$ monomials, where $d$ is a bound on the individual degrees of $f$. This answers a question of von zur Gathen and Kaltofen (JCSS 1985) who asked whether a quasi-polynomial bound holds in this case. Two immediate applications are a randomized quasi-polynomial time factoring algorithm for sparse polynomials and a deterministic quasi-polynomial time algorithm for sparse divisibility.

April 18, 2014 09:47 PM


selecting test data for neural networks

I have been working on a neural network based on certain technical indicators. As people familiar with neural networks know, after developing a hypothesis the developer is also supposed to provide a set of data to learn from. If this were a case of developing a neural network for spam filtering, I would provide it with sets of spam and non-spam data. But in my case, how do I select the buy/sell examples? Do I just randomly select the entry points where I can visually see the movement in price that I desire, or is there a better approach?

by user6762 at April 18, 2014 09:44 PM

PCA related Query

I am currently working on a project in grad school where I am using a PCA approach.

I have 4 stocks. I used R to generate the eigenvalues and eigenvectors.

Eigenvalues

Number  Value       Diff        Proportion
1.00    3.51300808  3.18720008  0.8782
2.00    0.325808    0.17528152  0.08145
3.00    0.15052648  0.13986904  0.03763
4.00    0.01065744              0.00266

Eigenvectors

         pc(1)      pc(2)        pc(3)       pc(4)
Stock 1  0.516215    0.4136083  -0.1234068  -0.7397439
Stock 2  0.5131561   0.31805179 -0.5016014   0.6196046
Stock 3  0.5048276   0.03720224  0.8298851   0.2346397
Stock 4  0.4640495  -0.85228354 -0.2108496  -0.1175298

How do I arrive at the eigenportfolios? My paper says I can arrive at the weights by dividing the eigenvector by the standard deviation of the stocks and then multiplying by the stock returns. For example, the stdev of Stock 1's daily data is 0.0375. When I divide Stock 1's factor loading on PC(1) by it, I get 98-something, which cannot be the weight of that stock in the eigenportfolio. Also, can you please tell me how to interpret the eigenportfolio if I use PC(1) loadings and PC(2) loadings? I am really confused and stuck. Luckily I came across this site. Any help is appreciated.
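I can't vouch for your paper's exact convention, but in the common construction (e.g. in the statistical-arbitrage literature) the raw quantities $v_i/\sigma_i$ are subsequently normalized so the weights sum to one, which is why the unnormalized numbers look far too large to be weights. A sketch using the PC(1) loadings from the post; the stdevs other than 0.0375 are hypothetical placeholders:

```python
# PC(1) loadings from the post; stdevs are hypothetical except the first
loadings = [0.516215, 0.5131561, 0.5048276, 0.4640495]
stdev = [0.0375, 0.0410, 0.0288, 0.0352]

raw = [v / s for v, s in zip(loadings, stdev)]  # large, unnormalized numbers
total = sum(raw)
weights = [r / total for r in raw]              # eigenportfolio weights

assert abs(sum(weights) - 1.0) < 1e-9
```

The eigenportfolio return is then the weighted sum of the stock returns. PC(1), with all-positive loadings, usually reads as a broad "market" factor, while PC(2), with mixed signs, is a long/short spread portfolio.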

by user7848 at April 18, 2014 09:37 PM

Planet Theory

danah boyd, Randall Munro, and netizens.

danah boyd, author of 'It's Complicated', just gave a tech talk at Google. Her book has been in the news a lot lately, so I'll skip the details (although Facebook ought to be at least slightly worried).

But what I enjoyed the most about her talk was the feeling that I was listening to a true netizen: someone who lives and breathes on the internet, understands (and has helped build) modern technology extremely well (she is a computer scientist as well as an ethnographer), and is able to deliver a subtle and nuanced perspective on the role and use of technology amidst all the technobabble (I'm looking at you, BIG data) that inundates us.

And she delivers a message that's original and "nontrivial". Both about how teens use and interact with social media, and about how we as a society process technological trends and put them in context of our lives. Her discussion of context collapse was enlightening: apart from explaining why weddings are such fraught experiences (better with alcohol!) it helped me understand incidences of cognitive frisson in my own interactions.

What she shares with Randall Munro in my mind is the ability to speak unselfconsciously and natively in a way that rings true for those of us who inhabit the world of tech, and yet articulate things that we might have felt, but are unable to put into words ourselves. Of course they're wildly different in so many other ways, but in this respect they are like ambassadors of the new world we live in.

by Suresh Venkatasubramanian at April 18, 2014 09:36 PM


"Or"-ing two Options in Scala?

I want to do something like this:

def or[A](x: Option[A], y: Option[A]) = x match {
 case None => y
 case _ => x
}
What is the idiomatic way to do this? The best I can come up with is Seq(x, y).flatten.headOption

by wrick at April 18, 2014 09:18 PM


Small-step semantics: for-loop

I'm trying to construct the small-step semantic rules for for-loops, but I can't find anything about them in the literature (only about while-loops).

I was wondering if anyone could help me out with this?

$\quad \displaystyle\sigma, \text{for } s_1 \, e_1 \, e_2 \, s_2 \, \rightarrow \, \sigma, \text{if } e_1 \text{ then (} s_2 ; \, e_2; \, \text{for } s_1 \, e_1 \, e_2 \, s_2 \text{ ) else } skip$

Where $\sigma$ is a local value store, $s_1$ is for example $i = 0$, $e_1$ could equal $i < 4$, and $e_2$ could be $i = i + 1$.

by ABC at April 18, 2014 09:08 PM

Planet Clojure

Learn Clojure…

Learn Clojure – Clojure Koans Walkthrough in Light Table IDE

You have heard of Clojure and no doubt the Clojure Koans.

Now there are videos solving the Clojure Koans using the Light Table IDE.

I first saw this at Clojure Koans by Christopher Bare.

by Patrick Durusau at April 18, 2014 09:01 PM


FX Rate dynamics

Let's suppose USD/EUR price in USD follows a GBM with $$ dS_t = rS_tdt + \sigma S_tdW_t $$ What process does EUR/USD follow in EUR?
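If I'm not mistaken, the first step is Itô's lemma applied to $Y_t = 1/S_t$ under the same (USD) measure:

$$dY_t = -\frac{1}{S_t^2}\,dS_t + \frac{1}{S_t^3}\,(dS_t)^2 = Y_t\left[(\sigma^2 - r)\,dt - \sigma\,dW_t\right]$$

So $1/S_t$ is again a GBM, with the same volatility $\sigma$ but drift $\sigma^2 - r$; expressing the EUR/USD dynamics in EUR then amounts to changing the numéraire to the EUR money market account.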

by neticin at April 18, 2014 08:59 PM

Risk neutral measure for jump processes

How can I construct a risk-neutral measure for the option price if the asset price is of the form

$$S(t)=S(0)\exp\left[\sigma W(t)+\left(\alpha-\beta\lambda-\tfrac{1}{2}\sigma^2\right)t+Q(t)\right]?$$

Here $W(t)$ is a Brownian motion and $Q(t)$ is a compound Poisson process.

Thank you beforehand.

by user7843 at April 18, 2014 08:56 PM


FreeBSD on an SSHD drive

Dear /r/freebsd,

I'm preparing to buy a new laptop in the near future (probably a Latitude from Dell). I've read different opinions about these SSHD drives - some say they could mean trouble, some say they just work. Does anybody here happen to use such a storage device? Should I choose it or stick with something traditional (an HDD)?

submitted by pfm
[link] [8 comments]

April 18, 2014 08:54 PM


Scala: "number" interpolation

Scala has string interpolation like raw"\n" for raw strings.

Does it have anything like number interpolation, e.g. 1px for one pixel? A nice syntax for numeric units would both make code more readable and make it easier to write safer code.

Like strings, numbers have a nice literal syntax and are fundamental.

Prefix notation px(1) is not how people write units:

case class px(n: Int)

And I don't think a postfix notation via implicit conversion can work:

case class Pixels(n: Int) {
 def px() = Pixels(n)
 def +(p: Pixels) = p match { case Pixels(m) => Pixels(n + m) }
}
implicit def Int2Pixels(n: Int) = Pixels(n)
  1. it needs a dot or space or parens (i.e. not (1 px) or (1)px or 1.px, which is not how humans write units).

  2. it won't check types i.e. we want to explicitly cast between these numeric type-alias things and numbers themselves (i.e. 1.px + 2 and 1 + 2.px and def inc(p: Pixels) = p + Pixels(1) with inc(0) all don't fail, because of the implicit cast, when they should).

by sam boosalis at April 18, 2014 08:51 PM


Install r5u87x on FreeBSD

I need to install the Ricoh r5u87x webcam loader on FreeBSD, because it fixes a suspend problem on my Vaio laptop under Unix/Linux, as I found when I tried it in Ubuntu. But the loader is Linux-only, I think. Is there any solution for FreeBSD? Thanks.

by hesam at April 18, 2014 08:40 PM


FileNotFoundException Could not locate clojure/java/jdbc__init.class

I have a problem with importing jars in Clojure. I used lein to add dependencies. This is the code from project.clj:

(defproject recommendation "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url ""
  :license {:name "Eclipse Public License"
            :url ""}
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [org.clojure/java.jdbc "0.0.6"]  ;; jdbc
                 [mysql/mysql-connector-java "5.1.6"]]
  :aot :all
  :main recommendation.core)

I typed lein deps in cmd, and it downloaded 3 jars into the lib folder for me.

This is the code from recommendation.core:

(ns recommendation.core
  (:require [clojure.java.jdbc :as sql]))

And I get exception:

FileNotFoundException Could not locate clojure/java/jdbc__init.class or clojure/java/jdbc.clj on classpath:   clojure.lang.RT.load (

Can anybody tell me where I am wrong and what to do?

EDIT: I solved the problem by restarting the REPL. There was a problem with :aot :all too; I couldn't restart the application, and Eclipse went into not-responding mode when I ran a REPL again.

Thanks anyway.

by user3549602 at April 18, 2014 08:38 PM


(College Student) Where can I find Historical Interest Rate Data?

Where can I find American historical savings account (bank) interest rates? If you can, please attach corresponding links.

by Aspiring Quant at April 18, 2014 08:35 PM


A variant of Travelling Salesman: Is it NP-complete if its sub-problems are NP-complete? [on hold]

Suppose there is a travelling salesman who wants to travel through N cities in k countries (k <= N). For convenience, he will travel through all the cities within a certain country and then move to another. In each country, he wants to find the shortest simple path traversing all the cities. [Classic TSP problem]

Q1. Suppose moving to another country costs nothing. Is that still an NP-complete problem? Any suggestion for a proof?

Q2. (There could be connections with certain costs between two cities in two different countries.) Suppose the salesman has to visit the countries (but not cities) in a certain order. Is that still an NP-complete problem? Any suggestion for a proof?

(Precisely, I should have used "NP-hard problem". I hope you can translate it into the decision version.)

Q1: It is intuitive that a general TSP instance can be reduced to this problem with k = 1. What I know is that you can reduce a general instance of a known NP-complete problem to a "special" instance of the problem that you wish to reduce to. However, I think "k = 1 (with N cities)" is a very special case that is much harder than instances where k > 1. That's why I am not quite certain this intuitive approach is correct.

"This is a dump of an exercise problem, not a question." I feel sorry for my algorithms teacher. I made this exercise up for a similar situation in my research (not in the area of algorithms, for sure). If I let k = 1, the instance it reduces to seems to be the hardest instance, and I doubt whether that is general. Since the sub-problems are NP-complete (in their decision versions), let the cost for the jth country with m cities be $T_j(m)$; then $\sum_j T_j(m)$ is much smaller than $T(N)$. Also, the problem is not that hard when k = N. (k is part of the input?)

"Is it NP-complete if its sub-problems are NP-complete?" (That is why I made up the problem.) If so, the intuitive approach seems correct.

Q2: I think it is related to Q1. I didn't post it earlier because I tend to solve things on my own. (Hints are welcome.) If you solve it, I may have to consider listing you as a coauthor later :).


by huoenter at April 18, 2014 08:32 PM

At what n does an n^2 x n^2 sudoku puzzle take too long to solve? [on hold]

I'm creating a sudoku solver, and I'm wondering: at what point does a simple backtracking sudoku-solving algorithm take way too long to compute a result? I'm thinking more than 30 minutes. I'll probably try to implement some heuristics, but I want to know at what point I should not expect a solution within 30 minutes.

by taytay at April 18, 2014 08:24 PM



Efficient lookup when key is made of multiple elements and elements can be empty

I want to create a map where the key contains multiple elements and the elements can be empty/null. The empty values are treated as "anything". I want the lookup function to match when the stored key is the lookup value or is a generalised version of it - the index key has empties where the lookup value has values. I think the formalisation would be "the lookup value logically subsumes the index key". I also want the lookup function to return the most specific index key, that is, the key with the fewest empties.

For example, if the data is stored in a (<key>, <value>) tuple with the key being a tuple of the elements and ? representing the empty set/null value:

((1, ?, 6, 3), "hey")
((1, 5, 6, 3), "hi")
((2, ?, ?, ?), "hello")

So lookup((2, 4, 5, 6)) -> "hello". And lookup((1, 5, 6, 3)) -> "hi" because (1, 5, 6, 3) is more specific than (1, ?, 6, 3).

A simple solution is to store them as shown above and simply scan through them. This would take $O(nm)$ where $n$ is the number of entries and $m$ is the number of elements in the key. Checking in most-to-least-specific order would mean a match could be returned immediately.
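As a baseline, the linear scan just described can be sketched in Python (None plays the role of the empty element; all names are made up):

```python
WILD = None  # stands in for the '?' / empty element

def matches(key, query):
    """True if key equals query except where key has wildcards."""
    return len(key) == len(query) and all(
        k is WILD or k == q for k, q in zip(key, query))

def lookup(index, query):
    """Return the value of the most specific matching key (fewest wildcards)."""
    by_specificity = sorted(index, key=lambda kv: sum(k is WILD for k in kv[0]))
    for key, value in by_specificity:
        if matches(key, query):
            return value
    return None

index = [((1, WILD, 6, 3), "hey"),
         ((1, 5, 6, 3), "hi"),
         ((2, WILD, WILD, WILD), "hello")]
```

With the example data, lookup(index, (2, 4, 5, 6)) gives "hello" and lookup(index, (1, 5, 6, 3)) gives "hi", since the fully specified key is tried first.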

Is there an approach that could improve on this?

Thank you

by superbriggs at April 18, 2014 08:13 PM



Good text-to-image tools like ditaa or plantuml that would be good for geometry and algebra?

I've been using org-mode more and more, especially as I become more proficient with elisp and calc. I played a bit with ditaa, but what I'm really looking for is a way to create images like what you'd see in a high school algebra or geometry textbook. Plane figures, polygons, circles, ellipses, graphs in the cartesian and/or complex plane, etc. I'd prefer something that integrates fairly easily with Emacs org-mode, that I could put to use in my day job as a high school math teacher. I've got a lot of FOSS tools at hand now: Gimp; Inkscape; LibreOffice Math; etc. Any suggestions?

submitted by LordAgni
[link] [1 comment]

April 18, 2014 08:04 PM


A procedure for Topological sort, proof for its correctness

Definition: A preserved invariant of a state machine is a predicate, $P$, on states, such that whenever $P(q)$ is true of a state, $q$, and $q \rightarrow r$ for some state, $r$, then $P(r)$ holds.

Definition: A line graph is a graph whose edges are all on one path.

Definition: Formally, a state machine is nothing more than a binary relation on a set, except that the elements of the set are called “states,” the relation is called the transition relation, and an arrow in the graph of the transition relation is called a transition. A transition from state $q$ to state $r$ will be written $q \rightarrow r$.

DAG: Directed Acyclic Graph

The following procedure can be applied to any directed graph, $G$:

  1. Delete an edge that is in a cycle.
  2. Delete edge $<u \rightarrow v>$ if there is a path from vertex $u$ to vertex $v$ that does not include $<u \rightarrow v>$.
  3. Add edge $<u \rightarrow v>$ if there is no path in either direction between vertex $u$ and vertex $v$.

Repeat these operations until none of them are applicable.
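For experimenting with small graphs, the three operations can be rendered directly in Python (a naive sketch using DFS reachability; the helper names are my own):

```python
def has_path(edges, u, v, skip=None):
    """DFS: is there a directed path u -> v, optionally ignoring one edge?"""
    adj = {}
    for a, b in edges:
        if (a, b) != skip:
            adj.setdefault(a, set()).add(b)
    stack, seen = [u], set()
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x in seen:
            continue
        seen.add(x)
        stack.extend(adj.get(x, ()))
    return False

def step(vertices, edges):
    """Apply one applicable operation; return the new edge set, or None if done."""
    for (u, v) in sorted(edges):
        if has_path(edges, v, u):               # 1. edge lies on a cycle
            return edges - {(u, v)}
        if has_path(edges, u, v, skip=(u, v)):  # 2. another u -> v path exists
            return edges - {(u, v)}
    for u in sorted(vertices):                  # 3. connect an incomparable pair
        for v in sorted(vertices):
            if u != v and not has_path(edges, u, v) and not has_path(edges, v, u):
                return edges | {(u, v)}
    return None

def run(vertices, edges):
    edges = set(edges)
    while (nxt := step(vertices, edges)) is not None:
        edges = nxt
    return edges
```

On the DAG with edges {(1,2), (1,3)}, for instance, operation 3 adds (2,3) and operation 2 then deletes the now-redundant (1,3), leaving the line graph 1 -> 2 -> 3.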

This procedure can be modeled as a state machine. The start state is $G$, and the states are all possible digraphs with the same vertices as $G$.

(b) Prove that if the procedure terminates with a digraph, $H$, then $H$ is a line graph with the same vertices as $G$.

Hint: Show that if $H$ is not a line graph, then some operation must be applicable.

(c) Prove that being a DAG is a preserved invariant of the procedure.

(d) Prove that if $G$ is a DAG and the procedure terminates, then the walk relation of the final line graph is a topological sort of $G$.

Hint: Verify that the predicate $P(u,v)$:: there is a directed path from $u$ to $v$ is a preserved invariant of the procedure, for any two vertices $u, \ v$ of a DAG.

(e) Prove that if $G$ is finite, then the procedure terminates.

Hint: Let $s$ be the number of cycles, $e$ the number of edges, and $p$ the number of pairs of vertices with a directed path (in either direction) between them. Note that $p \leq n^2$, where $n$ is the number of vertices of $G$. Find coefficients $a, b, c$ such that $as + bp + e + c$ is nonnegative integer valued and decreases at each transition.

My Problems:

I got stuck on problems $d$ and $e$, but solutions to the other problems are welcome too.

For problem $d$, I could not understand the hint: why it is given and how it helps.

In my attempt at proving $d$, I am trying to show that the procedure always preserves the order of the vertices, as given by the edges of the start graph $G$. Then a line graph is automatically a topological sort, since the "precedence order" of the vertices is preserved.

But operation number $3$ is problematic; how can I show it preserves precedence?

by xxx2000 at April 18, 2014 07:45 PM


In (reduce f val coll), is the val an accumulator?

When you call reduce and pass it a function and two arguments, can the first argument be considered an accumulator?

Is it always an accumulator?

Is it sometimes an accumulator?

I was reading a blog entry about using Clojure to parse big files and found this line:

(reduce line-func line-acc (line-seq rdr))

Link to the blog entry:

What about a simple: (reduce + [1 2 3])? Is there an accumulator involved?

I take it my question boils down to: "What exactly is an accumulator?"

But I'd still like to understand the relation between an accumulator and the reduce function, too. So any answers to these specific (related) questions are most welcome!
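For what it's worth, Python's functools.reduce has the same shape as Clojure's reduce, and yes: val is the initial value of the accumulator, and f is called with the accumulator so far and the next element. Without a val, the first element seeds the accumulator:

```python
from functools import reduce

# with an explicit initial value, like (reduce + 0 [1 2 3])
assert reduce(lambda acc, x: acc + x, [1, 2, 3], 0) == 6

# without one, like (reduce + [1 2 3]): the first element seeds the accumulator
assert reduce(lambda acc, x: acc + x, [1, 2, 3]) == 6

# the accumulator need not have the element type: count elements by parity
by_parity = reduce(
    lambda acc, x: {**acc, x % 2: acc[x % 2] + 1},
    [1, 2, 3, 4, 5],
    {0: 0, 1: 0})
assert by_parity == {0: 2, 1: 3}
```

So "accumulator" just names the value threaded through the successive calls of f; in the (reduce + [1 2 3]) case it still exists, it merely starts as the first element.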

by Cedric Martin at April 18, 2014 07:44 PM


Most efficient algorithm to search an unsorted array with a very precise data structure

(I apologize in advance if this question sounds a bit practical, but I suspect it might have an interesting theoretical aspect.)

I have a (large) array of data, not completely sorted, but which has a very precise structure, defined as follows.

The array has a length that is a (very large) power of 4.

The data is such that:

  • if the array is split into 4 parts - all with the same number of elements - then in each part the first element is the minimum of that part, and the last element is the maximum of that part (minimum < maximum).

  • if we take any one of these parts and, within it, repeat the subdivision into 4 equal parts, the above fact holds again for each of the new parts, all the way down until we arrive at the smallest parts, of 4 elements each.

(in other words, a sort of "fractal" arrangement, we might perhaps say).


I need to search this array for a given specific value.

  • What would be the most efficient algorithm to perform the search, given the above structure?
  • And what is the best complexity I should expect for this task?

(I would also like to know if this sort of problem is well known, has a name, and whether there are additional pointers I can read. Thank you.)
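One natural approach is branch-and-bound: at each level the first and last elements of a part give its min and max, so a whole quarter can be pruned whenever the target lies outside that range. A Python sketch (note that a fully sorted array happens to satisfy the stated property, which is handy for testing; in the worst case all four quarters may still need visiting):

```python
def search(arr, target, lo=0, hi=None):
    """Search a 'fractal' array where, recursively, each quarter's
    first element is its minimum and its last element is its maximum.
    Returns an index of target, or -1 if absent."""
    if hi is None:
        hi = len(arr)
    n = hi - lo
    if n == 0:
        return -1
    if target < arr[lo] or target > arr[hi - 1]:
        return -1            # prune: target outside this part's [min, max]
    if n <= 4:
        for i in range(lo, hi):
            if arr[i] == target:
                return i
        return -1
    q = n // 4               # recurse into the four quarters
    for k in range(4):
        i = search(arr, target, lo + k * q, lo + (k + 1) * q)
        if i != -1:
            return i
    return -1
```

How much the pruning helps depends on how disjoint the quarters' ranges are; with heavily overlapping ranges the worst case degrades toward a full scan.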

by Pam at April 18, 2014 07:38 PM


I wanna build a peer-to-peer chat system. Where do I begin?

Hello /r/compsci

I am a CS student. Summer is here and I want to do something during it. I wanted to make something to replace my reliance on Facebook for keeping in touch with friends, so I decided to build a peer-to-peer chat system.

I have some programming experience but nothing when it comes to building a p2p chat application. I know about the way the internet works (the OSI 7-layer model, TCP/IP, UDP, etc.) but I have never programmed in these environments before. I was researching p2p chat systems, and the best thing I could find that resembles what I want to make is WASTE. Something like this would be ideal for what I need.

Would anyone be kind enough to tell me what to learn in order to make this into reality?

PS: I want to build this from the ground up. This would be a good opportunity to learn new things and get some experience.

submitted by Droidx4_66
[link] [17 comments]

April 18, 2014 07:31 PM


Why are short expiries associated with more pronounced volatility skews?

I've noticed that, for a given strike price, options with shorter expiration dates have more pronounced implied volatilities.

Why is that?

by user7265 at April 18, 2014 07:26 PM



Run sequential process with scala future

I have two external processes to be run sequentially:

  val antProc = Process(Seq(antBatch, "everythingNoJunit"), new File(scriptDir))

  val bossProc = Process(Seq(bossBatch,"-DcreateConnectionPools=true"))

  val f: Future[Process] = Future {
    println("Run ant...")
  f onSuccess {
    case proc => {
      println("Run boss...")

The result is:

  Run ant...

  Process finished with exit code 0

How do I run antProc until completion, then bossProc?
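One hedged sketch of a Future-based way to sequence them: chain with flatMap, so the second future is not created until the first completes. Here runAnt and runBoss are stand-ins for running antProc and bossProc and returning their exit values:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

// Stand-ins for the two external processes; in the real build these
// would run antProc and bossProc and return their exit codes.
def runAnt(): Int  = { println("Run ant...");  0 }
def runBoss(): Int = { println("Run boss..."); 0 }

// flatMap sequences the futures: runBoss starts only after the future
// wrapping runAnt has completed successfully.
val both: Future[(Int, Int)] =
  Future(runAnt()).flatMap(ant => Future(runBoss()).map(boss => (ant, boss)))

val exitCodes = Await.result(both, Duration.Inf)
```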

The following method seems to achieve the purpose. However, it's not a Future approach.


by s.t.nguyen at April 18, 2014 07:21 PM


Time complexity of base conversion


As requested, a single question

Why can't arbitrary base conversion be done as fast as converting from base $b$ to base $b^k$ ?

There is a big time complexity difference, so I am also interested in further reading material about it.

Old. Original question

Conversion between power-2-radix can be done faster than between non-power-of-2 radix, they can be even done in parallel, as every digit (or some groups of them) can be decoded independently of the rest.

For example, the binary number 00101001 can be converted to hexadecimal 0x29 nibble by nibble (0010 and 1001), and vice versa (i.e. every hex digit can be decoded into 4 bits independently of the rest), but doing that conversion to decimal (or any other non-power-of-2 radix) is not so easy, because the digits affect each other.
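The digit-grouping idea can be sketched for any base $b$ and grouping factor $k$ (digits stored least-significant first; groupDigits is an illustrative name, not a library function):

```scala
// Convert base-b digits (least significant first) to base b^k by grouping
// k digits at a time; each group is converted independently of the others.
def groupDigits(digits: List[Int], b: Int, k: Int): List[Int] =
  digits.grouped(k).map(group => group.foldRight(0)((d, acc) => d + b * acc)).toList

// binary 00101001, least significant bit first, grouped into nibbles:
groupDigits(List(1, 0, 0, 1, 0, 1, 0, 0), 2, 4)  // List(9, 2), i.e. 0x29
```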

I've seen the time complexity of math operations on Wikipedia, and there is also a related question on stackoverflow stating the time complexity of conversion at arbitrary digit length to be $\mathcal{O}(M(n)\log n)$.

I'm not interested in a "general time complexity bounds for any base conversion" but I would like to know more about the big differences in time complexity between power-of-2 conversions vs any other base conversions.

It could be a general fact that conversions can be done faster between bases where one is a power of the other, not only for base 2; the same should hold from base 10 to base 100.

Is there any known proof or material on this?

by Hernan_eche at April 18, 2014 07:19 PM


Is the complement of a NP-class problem in NP? [on hold]

NP is defined as follows:
NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs of the fact that the answer is indeed "yes". So, if we look at the complement of a problem in NP, all NO instances of this problem are YES instances of the first one and vice versa. So, in the first problem we have certificates for YES inputs which can be verified efficiently. In the second version of the problem we have certificates for the NO instances which are verifiable in polynomial time. But the definition only considers the YES instances, which suggests the second version won't belong to the class NP. Yet the two problems are essentially the same.

Basically, I am confused why the definition says nothing about the NO inputs. Does it mean that if, for a decision problem, every NO instance can also be verified in polynomial time (YES instances can also be verified efficiently), then the problem is in NP? Please help me understand.

by user3286661 at April 18, 2014 07:04 PM


Is it possible to make tramp not block *all of emacs* while waiting for remote servers?

Using tramp to access shells and files on remote computers is awesome, except for one thing -- when the remote server is slow to respond (or worse went down for some reason) tramp blocks all of emacs, which is agonizing.

Does anyone know if there's a way to stop this from happening? Or, more precisely, how much work would be needed to fix this?

submitted by dilap

April 18, 2014 07:02 PM


The CIA's "Dr. Mengele", the doctor behind the torture program, ...

The CIA's "Dr. Mengele", the doctor behind the torture program, defends himself in the Guardian. He says he was just a man asked by his government to do something for his country. And besides, he thinks the torture worked. He believes the Senate report only claims that torture doesn't work because they were afraid other countries would otherwise use torture against Americans too. But they've long been doing that anyway, he reckons.

That someone like this would defend his employer George W Bush is no surprise. But this might be:

He also criticized Obama's healthcare policy – a "shit sandwich" – and his administration's approach to global warming. Mitchell believes it's a myth.
Well, everything fits together once again. They couldn't find a real scientist to do their dirty work, huh? So they just took a crackpot from Florida? Wow.

Incidentally, the guy already turned up in the CIA Inspector General's report back in 2004.

It said Mitchell and Jessen had "probably misrepresented" their "expertise" as experienced interrogators when pitching coercive techniques to the CIA as a way to obtain actionable intelligence from prisoners.
It wasn't the CIA that approached him, he approached the CIA! Good grief!

April 18, 2014 07:01 PM


A regex style matching library for generic matching on list items

I've seen a library like this around before but then forgotten about what it was called.

You can specify a pattern that matches elements in a list, something along the lines of:

(def oddsandevens (pattern (n-of odd? :*) (n-of even? 2) :$))

(pattern-match oddsandevens [1 1 2 2]) => true
(pattern-match oddsandevens [1 1 1 2 2]) => true

(pattern-match oddsandevens [1 1 2 2 2]) => false
(pattern-match oddsandevens [1 1 2]) => false

If I'm totally imagining this, can someone shed light on how one might write one of these things?
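In case it helps, a tiny backtracking matcher along these lines can be sketched in a few lines. A pattern is a list of (predicate, count) steps, where count -1 plays the role of :* and exhausting the input plays the role of :$; all names here are made up for illustration:

```scala
// steps: (predicate, count) pairs; count == -1 means "zero or more".
def matches[A](steps: List[(A => Boolean, Int)], xs: List[A]): Boolean =
  (steps, xs) match {
    case (Nil, Nil) => true    // pattern and input both exhausted
    case (Nil, _)   => false   // leftover input (the :$ anchor)
    case ((p, -1) :: rest, _) =>
      // backtracking: either consume one matching element, or move on
      (xs.nonEmpty && p(xs.head) && matches(steps, xs.tail)) || matches(rest, xs)
    case ((p, n) :: rest, y :: ys) if n > 0 && p(y) =>
      matches(if (n == 1) rest else (p, n - 1) :: rest, ys)
    case _ => false
  }

val odd  = (n: Int) => n % 2 == 1
val even = (n: Int) => n % 2 == 0
val oddsAndEvens = List((odd, -1), (even, 2))   // odd* even{2} $

matches(oddsAndEvens, List(1, 1, 2, 2))     // true
matches(oddsAndEvens, List(1, 1, 2, 2, 2))  // false
```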

by zcaudate at April 18, 2014 06:58 PM



Count number of ways to place ones in an $M \times M$ matrix so that every row and column has $k$ ones?

On math.stackexchange, someone asked how to count the number of ways to place $1$'s into a $10 \times 10$ matrix so that every row and column has $5$ $1$'s. Each element of the matrix must be either zero or one.

I came up with a recursive solution for an $N \times 10$ matrix. Subproblems are indexed by the counts $c_k$ of how many columns have $k$ $1$'s, for $k =0, 1,2,3,4,5$. The counts $c_k$ have to satisfy $\sum_k c_k = 10$, and they also have to satisfy $\sum_k kc_k = 5N$ and $c_k = 0$ for $k > N$. The complexity of this algorithm basically boils down to how many distinct sets of valid indices $(c_k)_k$ there are.

For a $10 \times 10$ matrix I think this approach should work out nicely, but I worry the complexity might get prohibitively large if we wanted to count how many ways to get $M/2$ $1$'s in every row and column of an $M \times M$ matrix. So I'm wondering, is there a more efficient way to solve this counting problem? In other words, a better way than solving for $N \times M$ in increasing order of $N$ and keeping track of subcases indexed by $(c_k)_k$ such that $\sum_k c_k = M$ and $\sum_k k c_k = NM/2$? Also, for my solution, can anybody work out a good bound for how many sub-cases I have as a function of $M$?
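For what it's worth, the recursion over remaining column capacities described above can be sketched directly, memoised on the sorted capacity vector (the count depends only on the multiset of capacities); this is a small, unoptimised illustration, not a tight-complexity solution:

```scala
import scala.collection.mutable

object MatrixCount {
  private val memo = mutable.Map.empty[(Int, Int, Vector[Int]), BigInt]

  // Number of 0-1 matrices with n rows still to fill and k ones per row,
  // where colCap(i) says how many more ones column i may still receive.
  def count(n: Int, k: Int, colCap: Vector[Int]): BigInt = {
    if (n == 0) return if (colCap.forall(_ == 0)) BigInt(1) else BigInt(0)
    val key = (n, k, colCap.sorted)  // only the multiset of capacities matters
    memo.getOrElse(key, {
      val open = colCap.indices.filter(colCap(_) > 0)
      val v = open.combinations(k).map { cols =>
        count(n - 1, k, cols.foldLeft(colCap)((c, i) => c.updated(i, c(i) - 1)))
      }.sum
      memo(key) = v
      v
    })
  }
}

// 4x4 matrices with two ones in every row and column:
MatrixCount.count(4, 2, Vector.fill(4)(2))  // 90
```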

by user2566092 at April 18, 2014 06:41 PM


Does Kannan's theorem imply that NEXPTIME^NP ⊄ P/poly?

I was reading a paper of Buhrman and Homer “Superpolynomial Circuits, Almost Sparse Oracles and the Exponential Hierarchy”.

On the bottom of page 2 they remark that the results of Kannan imply that $NEXPTIME^{NP}$ does not have polynomial size circuits. I know that in the exponential time hierarchy, $NEXPTIME^{NP}$ is just $\Sigma_2EXP$, and I also know that Kannan's result is that $\forall c\mbox{ }\exists L\in\Sigma_2P$ such that $L \not\in Size(n^c)$. Of course, Kannan's theorem is NOT saying $\Sigma_2P \not\subset P/poly$ (for that to be the case we would need to show that $\exists L\in\Sigma_2P$ such that $\forall c$, $L \not\in Size(n^c)$). However, I don't see how Kannan's result implies that $NEXPTIME^{NP} \not\subset P/poly$.

by Lorraine at April 18, 2014 06:39 PM


How to perform Empirical Mode Decomposition?

I am trying to use the EMD applied to EURUSD open price to train a machine learning algo (RVM).

I have run the EMD only once on my training set and once on the training+test set.

The results on the test set alone are quite good. However, when I apply the algo to the last sample only, the predictions are bad.

Shall I run the EMD on each sample of my training set using a sliding window?

I understand EMD is non-causal, but can it be used in some way for training a machine learning algo?
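The sliding-window variant the question asks about can be sketched generically; `decompose` below is a placeholder for running the EMD on one window, and all names are invented for illustration:

```scala
// Causal feature extraction: the value for time t is computed from the
// window ending at t only, never from future samples.
def slidingFeatures(prices: Vector[Double], window: Int)
                   (decompose: Vector[Double] => Double): Vector[Double] =
  (window to prices.length).map(t => decompose(prices.slice(t - window, t))).toVector

// With a trivial stand-in decomposition (the window sum):
slidingFeatures(Vector(1.0, 2.0, 3.0, 4.0), 2)(_.sum)  // Vector(3.0, 5.0, 7.0)
```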

by philcta at April 18, 2014 06:35 PM


Scala Override Return Type

I have a task in which I need to override the return type of a method. The problem is that the method is called, yet the return type is not overridden. Please help!

abstract class Parser {
   type T1 <: Any;
   def test1(): T1;

   type T2 <: String;
   def test2(): T2;
}

//class path: parser.ParserA
class ParserA extends Parser {
   type T1 = String;
   override def test1(): T1 =
       return "a,b,c";

   type T2 = String;
   override def test2(): T2 =
       return "a,b,c";
}

//some where in the project
val pa = Class.forName("parser.ParserA").newInstance().asInstanceOf[Parser];
println(pa.test1().length());// error: value length is not a member of pa.TYPE_4
println(pa.test2().length());// this works, print 5;
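For reference, a hedged sketch of why the first call fails and one way around it: statically, pa has type Parser, so pa.test1() has the abstract type pa.T1 with no String members; casting to a refinement that fixes T1 restores them (simplified stand-in classes, not the code above):

```scala
abstract class Parser { type T1; def test1(): T1 }
class ParserA extends Parser { type T1 = String; def test1(): T1 = "a,b,c" }

val pa: Parser = new ParserA
// pa.test1() : pa.T1 -- abstract here, so it has no `length` member.
// Casting to a refinement that fixes T1 makes the member visible again:
val pb = pa.asInstanceOf[Parser { type T1 = String }]
val n  = pb.test1().length   // 5
```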

Please Help! Thank you in advance!

by user2668751 at April 18, 2014 06:34 PM



SBT create sub-projects using a collection

I've been searching if this is possible for a while with little success.

Using SBT, can you create a sub-project programmatically, without explicitly assigning each project to its own val?

My current project structure looks something like this:

    common/ <--- This is another sub-project that others dependOn
    apps/ <--- sub-projects live here

Sub1 and Sub2 are both their own SBT projects.

My first attempt to link these projects together looked like this:

// root/project/build.scala
import sbt._
import Keys._
object build extends Build {
  lazy val common = project /* Pseudo-code */
  val names = List("Sub1", "Sub2")
  lazy val deps = names map { name =>
    Project(id = name, base = file(s"apps/$name")).dependsOn(common)
  }

  lazy val finalDeps = common :: deps
  lazy val root = Project(id = "root", base = file("."))
                    .aggregate( :_*)
                    .dependsOn( => ClasspathDependency(d, None)) :_*)
}

However, because SBT uses reflection to build its projects and sub-projects, this doesn't work.

It only works if each sub-project is stated explicitly:

lazy val Sub1 = Project(id = "Sub1", base = file("apps/Sub1"))

So the question:

Is there a way to programmatically build sub-project dependencies in SBT?

by Snnappie at April 18, 2014 06:14 PM



Extend Scala Set with concrete type

Really struggling to figure out extending the immutable Set with a class that will represent a Set of concrete type. I'm doing this to try and create a nice DSL.

I'd like to have a class Thing, and when you add 'things' together you get a ThingSet object, which extends Set.

class Thing(val name:String){
  def +(other: Thing):ThingSet = new ThingSet() + other
}

I just can't figure out how to make the ThingSet object. I know I need to mix in traits like GenericSetTemplate, SetLike etc. But I just can't make it work.

Please, can anybody give me some pointers, as I can't find anything explicit enough to learn from. I've tried looking at the BitSet and HashSet implementations, but get lost.
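In case it helps, a much lighter alternative is to wrap a Set rather than extend it, which sidesteps GenericSetTemplate/SetLike entirely; this is a sketch with invented member names, enough for a DSL but not a real scala.collection.Set:

```scala
// Wrapping instead of extending: ThingSet owns an ordinary immutable Set
// and exposes only the DSL operations we actually need.
class Thing(val name: String) {
  def +(other: Thing): ThingSet = new ThingSet(Set(this, other))
}

class ThingSet(val things: Set[Thing]) {
  def +(t: Thing): ThingSet = new ThingSet(things + t)
  def names: Set[String] =
}

val a = new Thing("a")
val b = new Thing("b")
(a + b + new Thing("c")).names  // Set("a", "b", "c")
```

The price is that ThingSet is not usable where a scala.collection.Set is expected; the gain is not having to satisfy the full collections contract.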

by user523071 at April 18, 2014 05:59 PM


Digital systems: Bidirectional shift register?

Can anyone walk me through the design of a bidirectional shift register with a TTL 74175?

submitted by Wrong_Burgundy

April 18, 2014 05:55 PM


Convert a document with Play Json

I have a list of courses from the coursera api:

    {"id":69,
     "name":"Contraception: Choices, Culture and Consequences",
     ...}

I want to convert it to a document that looks like so (I use <--- arrows as comments):

      { "Type" : "Course",
        "Title" : "contraception",   <---- short name
        "Content" : {"id":69,        <---- the original course
               "name":"Contraception: Choices, Culture and Consequences",
               ...} }

Is it possible to perform this with json only api from play? Here is how I do it presently (with conversion to scala lists).

val courses = (response.json \ "elements").as[List[JsValue]]
      .map { course =>
      // This is how we want our document to look
        Json.obj(
          "Type" -> "Course",
          "Provider" -> "Coursera",
          "Title" -> (course \ "name"),
          "Content" -> course
        )
      }
 // then put this into the final json object with "Courses" ...

by drozzy at April 18, 2014 05:53 PM


How to prove that any minimum vertex cover of a clique of size $n$ must have exactly $n-1$ vertices? [on hold]

I would appreciate any help with this question... I don't know how to prove NP-completeness when it's given with a new problem!

How can I prove that any minimum vertex cover of a clique of size $n$ must have exactly $n-1$ vertices?

by user3047166 at April 18, 2014 05:48 PM


Why did the Scala language choose the names eq and ne to compare references? [on hold]

I understand that Scala chose == and != as the replacement for equals (in most cases). I feel Scala's == is like JavaScript's.

For the same reason, I would expect Scala to choose === and !== to replace Java's == and !=, like JavaScript did.

But Scala surprised me by choosing eq and ne for reference comparison. These names seem to come from Lisp, and are not commonly seen in a curly-bracket language like C/C++/Java/JavaScript/C#.

So, why did they choose the names eq and ne to compare references?

by user955091 at April 18, 2014 05:35 PM



please help me to identify the state of the art algorithms in depth first branch and bound

I am thinking of undertaking a research project in constraint optimization. The problem is a combinatorial search in a tree with a fixed goal depth. Among the approaches I am considering is depth-first branch and bound.

I'd like to know what the current state of the art is in branch and bound depth-first search before I rule out or pursue this avenue. I'm interested in such things as value ordering (i.e. which node to expand next) and cost modeling of leaves to inform the search heuristic.

by user20930 at April 18, 2014 05:10 PM




Clojure questions by a newbie: how do you package/distribute? how do you do make desktop applications?

How do you package Clojure applications for end-users that are not developers?

Is it easy/possible to make desktop applications with Clojure?

submitted by TheMagicHorsey

April 18, 2014 05:03 PM


The Americans want to deny Russia's newest spy plane ...

The Americans want to deny Russia's newest spy plane overflight permission. Since 1992 there has been a treaty (though only signed in 2002), part of nuclear disarmament, that allows member states to fly their spy planes, with their best available sensors, over the other countries, so that they can independently verify whether those countries are secretly building up nuclear arms, or mobilizing or moving their military around. And now the Russians apparently have a new sensor on board that worries the US military so much that they no longer want to allow it.

April 18, 2014 05:01 PM


Benefit of defining a trait whose purpose is only to mix into an object?

In scalaZ (file Id.scala), the context is like the following

/** Mixed into object `Id` in the package object [[scalaz]]. */
trait IdInstances {


object Id extends IdInstances

So why not directly use the following without first defining IdInstances? What's the benefit of doing it like in Id.scala?

object Id {
   ...//with the same context from within IdInstances
}

by Daniel Wu at April 18, 2014 04:59 PM

How can I have an optional Sbt setting?

There is a project shared with multiple participants. Some participants installed a global sbteclipse at ~/.sbt/0.13/plugins/plugins.sbt, while other participants didn't.

I want to put some sbt settings in the project's build.sbt, like:

EclipseKeys.createSrc := EclipseCreateSrc.Unmanaged + EclipseCreateSrc.Managed + EclipseCreateSrc.Source

I wish to apply these settings only for those participants who have installed a global sbteclipse, and do not affect others.

How can I achieve that?

by user955091 at April 18, 2014 04:58 PM

sbt eclipse command changed files and packages

I created a new Scala project in eclipse, then added a package and a Scala object. So far so good...

I want to add an external library, so I added a project folder with a plugins.sbt file, and another file build.sbt in the root project.

In the terminal I compiled the project successfully with the sbt compile task.

The problem is that after the sbt eclipse command the eclipse project changed from a Scala project to something else... all the packages changed to simple folders and the Scala project is ruined.

  • scala IDE :Build id: 3.0.3-20140327-1716-Typesafe
  • scala version :2.10.4
  • sbt version:0.13.0

You can see it in the screenshot (image not included here).

by MIkCode at April 18, 2014 04:50 PM

Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect?

If not, how can I find out that a connection to the queue was interrupted? I've done the following test:

  • start the queue (and some producer)
  • start my consumer (it was getting messages as expected)
  • stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
  • start the queue (no new messages were received)

I am using camel in Scala (via akka-camel), but a Java solution would probably also be OK.

by jfu at April 18, 2014 04:37 PM



scala pickling in the nontrivial case

How do I pickle the following?

abstract class T
case class F(val l: List[T]) extends T
// F(val l: [...Seq, Vector, Array...][T])
case class I(val i: Int) extends T

val p = F(List(I(1), F(List(I(2), F(List(I(1), F(List(I(2))))))))).pickle
val s = p.value
val u = s.unpickle[F]  // [T]

And this is what happens:

[error] ...
[error] ... 
[error] ... 


The good thing about scala/pickling is that it works in some cases.

by user3464741 at April 18, 2014 04:37 PM

Can't configure SBT dependency: object XYZ is not a member of package

I am working with IntelliJ IDEA 13.1.1 in order to set up a Scala SBT Project. My build.sbt looks like this:

name := "MyProject"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq (
  "org.scala-lang" % "scala-swing" % "2.10.4",
  "org.jfree" % "jfreechart" % "1.0.17"
)
As for scala-swing, it works perfectly in my project. But there are problems with the following line:

import{XYDataset, DefaultXYDataset}

There is no syntax error highlighting in IDEA, but it still says the following during compilation:

Error:(6, 12) object jfree is not a member of package org import{XYDataset, DefaultXYDataset}

If I add the same jfreechart library using Maven it does compile. But I really want to have everything set up just with SBT. Does anyone know the solution?

by kreide at April 18, 2014 04:17 PM

Scheme - Help Writing A Function

I'm trying to write a function in Scheme that takes two strings as input and returns a list of all optimal pairs of strings. In order to do this, I know that I need to make use of the following functions that I have already written. The functions already written will obviously need to be used as helper functions for the function that I'm trying to write.

1. alignment-score-tail

This function takes two strings, and scores each character according to a scoring criteria and accumulates the result. If two characters are equal, a score of 2 is obtained, if two characters are not equal, a score of -1 is obtained, and finally, if one character is an underscore, and the other character something elses, a score of -2 is obtained.

Here is example input/output:

> (alignment-score-tail "x" "x")
2

> (alignment-score-tail "x" "y")
-1

> (alignment-score-tail "x" "_")
-2

> (alignment-score-tail "Hello" "__low")
-4

2. change-string-pairs

This function takes two chars (a and b, say) and a list of pairs of strings as input, and returns a modified list of pairs of strings: for each string pair in the list, a is prepended to the first string and b to the second.

Here is example input/output:

> (change-string-pairs "a" "b" '(("one" "two")("three" "four")("five" "six")))
(("aone" "btwo") ("athree" "bfour") ("afive" "bsix"))

3. get-best-pairs

This function takes both a scoring function (scoring function in this case will be alignment-score-tail, which is described above) and a list of pairs of strings as input and then returns a modified list of pairs of strings. The returned list will contain all the optimal string pairs from the input, scored according to the input function.

Here is example input/output:

> (get-best-pairs alignment-score-tail '(("hello" "b_low")("hello_" "b_l_ow")("hello" "_blow")("hello" "blow")("h_e_llo" "bl_o__w")))
(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow"))

Given the helper functions described above, the function that I'm trying to write should behave as follows:

> (get-all-best-pairs "hello" "blow")
(("hello" "b_low") 
 ("hello_" "b_l_ow") 
 ("hello_" "b__low") 
 ("hello" "_blow") 
 ("hello_" "_bl_ow") 
 ("hello_" "_b_low") 
 ("hello_" "__blow"))

It would be really great if I could see how this can be done. Also, in the functions that I have written above, I have made use of some built-in Scheme functions like map, filter, append and apply.

I am also aware that the algorithm is extremely inefficient, and is of exponential complexity. That is not a concern to me at this time.

by Curiosity at April 18, 2014 04:13 PM

High Scalability

Stuff The Internet Says On Scalability For April 18th, 2014

Hey, it's HighScalability time:

Scaling to the top of "Bun Mountain" in Hong Kong
  • 44 trillion gigabytes: size of the digital universe by 2020; 6 Times: we have six times more "stuff" than the generation before us.
  • Quotable Quotes:
    • Facebook: Our warehouse stores upwards of 300 PB of Hive data, with an incoming daily rate of about 600 TB.
    • @windley: The problem with the Internet of Things is right now it’s more like the CompuServe of Things
    • Chip Overclock: If you want to eventually generate revenue, you must first optimize for developer productivity; everything else is negotiable.
    • @igrigorik: if you are gzipping your images.. you're doing it wrong:  - check your server configs! and your CDN... :)
    • Seth Lloyd: The arrow of time is an arrow of increasing correlations.
    • @kitmacgillivray: When will Google enable / require all android apps to have full deep search integration so all content is visible to the engine?
    • Neal Ford: Yesterday's best practices become tomorrow's anti-patterns.
    • Rüdiger Möller: just made a quick sum up of concurrency related issues we had in a 7-15 developer team soft realtime application (clustered inmemory data grid + GUI front end). 95% of threads created are not to improve throughput but to avoid blocking (e.g. IO, don't block messaging receiver thread, avoid blocking the event thread in a GUI app, ..).
    • Ansible: When done correctly, automation tools are giving them time back -- and helping out of this problem of needing to wear many hats.

  • Amazon and Google are in an epic battle to dominate the cloud—and Amazon may already have won: If Amazon’s entire public cloud were a single computer, it would have five times more capacity than those of its next biggest 14 competitors—including Google—combined. Every day, one-third of people who use the internet visit a site or use a service running on Amazon’s cloud.

  • What books would you select to help sustain or rebuild civilization? Here's Neal Stephenson’s list. He was too shy to do so, but I would certainly add his books to my list. 

  • 12-Step Program for Scaling Web Applications on PostgreSQL. Great write-up of lessons learned that they used to sustain tens of thousands of concurrent users at 3K req/sec. Highly detailed. 1) Add more cache, 2) Optimize SQL, 3) Upgrade hardware and RAM, 4) Scale reads by replication, 5) Use more appropriate tools, 6) Move write-heavy tables out, 7) Tune Postgres and your file system, 8) Buffer and serialize frequent updates, 9) Optimize DB schema, 10) Shard busy tables vertically, 11) Wrap busy tables with services, 12) Shard services backend horizontally.

  • Is this a soap opera? It turns out Google and not Facebook is buying Titan Aerospace, makers of high flying solar powered drones. Google's fiber network would make a great backbone network for a drone and loon powered wireless network, completely routing around the telcos.

  • Building Carousel, Part I: How we made our networked mobile app feel fast and local. Dropbox changed to an eventually consistent / optimistic replication syncing model to make their app "feel fast, responsive, and local, even though the data on which users operate is ultimately backed by the Dropbox servers." Lesson: don't block the user by requiring changes to be propagated to the server synchronously. Local and remote photos are treated as equivalent objects. Actions take effect immediately locally and then work their way out globally. Changes are used to stay consistent. A fast hash is used to tell which photos have not been backed up to dropbox. Remote operations happen asynchronously.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

by Todd Hoff at April 18, 2014 03:56 PM



SBT method show progress

 URL(...), file(...))  // my program freezes until the end of this method

I am writing an sbt plugin and want to download some files from the internet. I want to somehow show a progress bar during the download. That would be nice, informing the user that the program still works by showing some info.

How would you do this?
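One hedged approach: bypass and stream the connection by hand, reporting progress from the copy loop. downloadWithProgress is an invented helper for illustration, not an sbt API:

```scala
import{BufferedInputStream, File, FileOutputStream}

// Streams the URL manually instead of calling,
// invoking onProgress(bytesCopied, totalBytes) after every chunk.
// totalBytes is -1 when the server does not announce a length.
def downloadWithProgress(url: URL, dest: File)
                        (onProgress: (Long, Long) => Unit): Unit = {
  val conn  = url.openConnection()
  val total = conn.getContentLengthLong
  val in    = new BufferedInputStream(conn.getInputStream)
  val out   = new FileOutputStream(dest)
  try {
    val buf = new Array[Byte](8192)
    var copied = 0L
    var n =
    while (n != -1) {
      out.write(buf, 0, n)
      copied += n
      onProgress(copied, total)   // e.g. print "copied/total bytes"
      n =
    }
  } finally { in.close(); out.close() }
}
```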

by pawel.panasewicz at April 18, 2014 03:38 PM


Prove that the set cover (U,F) is NP-hard? [on hold]

Prove that the set cover problem (U,F) is NP-hard, even if, for each element of U, there are at most two subsets in the family F that cover this element.

by user3047166 at April 18, 2014 03:29 PM


One week of OpenSSL cleanup

After the news of heartbleed broke early last week, the OpenBSD team dove in and started axing it up into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit count basis with both having around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@) who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@) who've been both doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls.

Then Jonathan Grey (jsg@) and Reyk Flöter (reyk@) come next, followed by a group of late starters. Also, an honorable mention for Christian Weisgerber (naddy@), who has been fixing issues in ports related to this work.

All combined, there've been over 250 commits cleaning up OpenSSL. In one week. Some of these are simple or small changes, while other commits carry more weight. Of course, occasionally mistakes get made, but these are quickly fixed again; the general direction is clear: move the tree forward towards a better, more readable, less buggy crypto library.

April 18, 2014 03:28 PM


scalac for Call-by-Name use references

I have some function:

def f(x: Int) = x * x

and then I call it:

var y = 0
f { y += 1; y }

Bytecode generated for above code looks like:

     0: iconst_0      
     1: istore_1      
     2: aload_0       
     3: iload_1       
     4: iconst_1      
     5: iadd          
     6: istore_1      
     7: iload_1       
     8: invokevirtual #18                 // Method f:(I)I
    11: pop           
    12: return

If I change function def f(x: Int) to represent Call-by-Name:

def f(x: => Int) = x * x

generated bytecode for the same part of code looks like:

     0: new           #24                 // class scala/runtime/IntRef
     3: dup           
     4: iconst_0      
     5: invokespecial #28                 // Method scala/runtime/IntRef."<init>":(I)V
     8: astore_1      
     9: aload_0

My question is:

Is it a rule that for call-by-name we operate on references, or does it depend on the semantic analysis phase of compilation?
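The reference indirection is a consequence of closure capture: a by-name argument compiles to a thunk (a Function0), and a mutable local like y that the thunk captures must be boxed into a scala.runtime.IntRef so the closure can mutate it. A small sketch that also shows the observable difference between the two signatures:

```scala
object ByNameDemo {
  def byValue(x: Int)   = x * x   // argument evaluated once, at the call site
  def byName(x: => Int) = x * x   // argument re-evaluated at every use of x

  def run(): (Int, Int, Int, Int) = {
    var y = 0
    val a = byValue { y += 1; y }  // block runs once:  a = 1 * 1 = 1, y ends at 1
    val ya = y
    var z = 0                      // z is boxed into an IntRef for the thunk
    val b = byName { z += 1; z }   // thunk runs twice: b = 1 * 2 = 2, z ends at 2
    val za = z
    (a, ya, b, za)
  }
}
```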

by piobab at April 18, 2014 03:28 PM

UnsatisfiedLinkError with native library under sbt

I'm using sbt 0.13 and have issues using the leveldbjni native library under sbt (even after issue #358 has been resolved). A similar issue has already been reported for which sbt 0.13 should provide a solution but it seems it doesn't. So I'm sharing my observations here.

I'm getting an UnsatisfiedLinkError with the following example application.

  • build.sbt

    name := "example"
    version := "0.1"
    scalaVersion := "2.10.2"
    libraryDependencies += "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.7"

  • Example.scala

    import org.fusesource.leveldbjni.internal._
    object Example extends App {
      NativeDB.LIBRARY.load() // loading succeeds
      new NativeOptions() // UnsatisfiedLinkError under sbt
    }

I'm using Oracle JDK 1.7 and OS X 10.8.5. Running the example with run-main Example under sbt gives

[error] (run-main) java.lang.UnsatisfiedLinkError: org.fusesource.leveldbjni.internal.NativeOptions.init()V

whereas running it with

java -cp scala-library.jar:example_2.10-0.1.jar:leveldbjni-all-1.7.jar Example

just works fine. The application even runs successfully when Scala is on the bootclasspath:

java -Xbootclasspath/a:scala-library.jar -cp example_2.10-0.1.jar:leveldbjni-all-1.7.jar Example

Any ideas why there's an UnsatisfiedLinkError only under sbt?

by Martin Krasser at April 18, 2014 03:26 PM

Planet Theory


Time for a short rundown of announcements.

  • STOC will be held May 31-June 3 in New York City. Early registration and hotel deadline is April 30. Student travel support requests due by this Monday.
  • The newly renamed ACM Conference on Economics and Computation (EC '14) will be held in Palo Alto June 8-12. Early registration deadline is May 15. Hotel deadline is May 19th but the organizers suggest booking early because Stanford graduation is June 13.
  • The Conference on Computational Complexity will be held June 11-13 in Vancouver. Local arrangements information will be posted when available.
  • The ACM Transactions on Algorithms is searching for a new Editor-in-Chief. Nominations due May 16.
  • Several of the ACM Awards have been announced. Robert Blumofe and Charles Leiserson will receive the Paris Kanellakis Theory and Practice Award for their "contributions to robust parallel and distributed computing."
  • Belated congratulations to new Sloan Fellows Nina Balcan, Nayantara Bhatnagar, Sharon Goldberg, Sasha Sherstov, David Steurer and Paul Valiant.

by Lance Fortnow at April 18, 2014 03:25 PM


Complete combinator basis for System F-omega

The S and K combinators form a complete (and Turing complete) basis when untyped. Within the Hindley-Milner type-system, and I believe within system $F$ as well, S and K can encode any well-typed function and, with the addition of the Y combinator, you gain Turing completeness (and adding other recursion combinations yields different recursive classes).
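For concreteness, the polymorphic type schemes of the base combinators mentioned here (as typed in System $F$) are:

```latex
K : \forall A\,B.\; A \to B \to A
S : \forall A\,B\,C.\; (A \to B \to C) \to (A \to B) \to A \to C
```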

I highly suspect that the same can be done for System $F_{\omega}$ but I'm not quite sure how to do this. Is there a set of combinators that forms a complete basis typed under System $F_\omega$?

Additionally, in system $F$ type checking is only decidable with type hints (Church style). Would a combinator basis for system $F$ still have decidable type checking? If true would this still also be true of a complete basis for system $F_\omega$?

What about complete basis and decidability of type checking for Per Martin-Löf typing systems?

by Jake at April 18, 2014 03:24 PM


java.lang.NoClassDefFoundError when running Scala JUnit Test on Scala IDE (Eclipse Kepler)

I've recently decided to install Scala IDE 3.0.3 (which is basically Eclipse Kepler with the Scala plugin). I have the newest specs (specs2_2.10-23.11), scalaz (2.10-7.0.4) and collection (scalaj-collection_2.10-1.5).

I tried to run my tests in Scala using "Scala JUnit Test" but I got this error:

java.lang.NoClassDefFoundError: scalaz/concurrent/Strategy$ at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1$$anonfun$2.apply(ExecutionStrategy.scala:43) at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1$$anonfun$2.apply(ExecutionStrategy.scala:41) at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111) at scala.collection.immutable.List.foldLeft(List.scala:84) at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1.apply(ExecutionStrategy.scala:41) at org.specs2.reporter.DefaultExecutionStrategy$$anonfun$execute$1.apply(ExecutionStrategy.scala:38) at scalaz.syntax.IdOps$class.$bar$greater(IdOps.scala:15) at scalaz.syntax.ToIdOps$$anon$1.$bar$greater(IdOps.scala:78) at org.specs2.reporter.JUnitReporter$ at org.specs2.runner.JUnitRunner$$anon$ at at at at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests( at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests( at at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(

Caused by: java.lang.ClassNotFoundException: scalaz.concurrent.Strategy$ at$ Source) at Method) at Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 17 more

What is causing that? I'm probably missing something, but I can't find what.

My tests are running just fine with gradle.

by ojciecmatki at April 18, 2014 03:09 PM

Re-def vars in Clojure(script)

I'm trying to find the idiomatic way to defer the initialization of a var (which I really intend to be immutable).

(def foo nil)
(defn init []
  ; (def foo (some-function))
  ; (set! foo (some-function))
  )

I know Rich Hickey said re-defing isn't idiomatic. Is set! appropriate here?

by shaun at April 18, 2014 03:08 PM

Portland Pattern Repository



log4j properties files based on leiningen test metadata?

How can I use different log4j properties files based on leiningen test metadata? I have functions that write debug logging output to a file. Often there is a lot of data being written to this debug log file, slowing down the function. Normal runs of the application will not have debug file writing, so I want to benchmark the normal running of the function without that file writing. For benchmarking, I am using criterium. Let's assume that the metadata for benchmarking deftest definitions is :benchmark.

by user1559027 at April 18, 2014 02:58 PM


Default kill-buffer behavior

The default kill-buffer behavior (C-x k RET) opens the last buffer visited which currently isn't already opened somewhere else.

Can I change this behavior so that it will reopen the last buffer in that window regardless of whether that buffer is already opened in some other frame?

Also, how could I figure this kind of thing out on my own? I looked here, here and here but couldn't find any information about this.

Thanks, -E

submitted by 23421543
[link] [11 comments]

April 18, 2014 02:48 PM


GrrCon - Oct 16-17 2014 - Grand Rapids, MI

Hey all,

Just wanted to make all you midwestern Redditors aware of GrrCon, happening in October. If you are a student and you register for tickets with your .edu address, you can get 100 bucks off the ticket price (regular is $150, student is $50). It's an awesome conference filled with a great, international speaker lineup. If you have any other questions about the student pricing go to or toss me a DM.

There are two awesome keynotes this year by Dave Kennedy and Jayson Street.

Ticket price includes food, beer, and access to the after party featuring Henry Rollins.

From the about page: GrrCON is an information security and hacking conference put together to provide the community with a venue to come together and share ideas, information, solutions, forge relationships, and most importantly engage with like minded people in a fun atmosphere without all the elitist “Diva” nonsense. We bring together the CISO, the hacker, the security practitioner, and the researcher in a one-of-a-kind experience you CANNOT get elsewhere.

We provide three+ presentation tracks, in-con workshops, pre-con training, and a solutions arena to ensure you get the most out of the event. Come join the conversation.

GrrCon: No egos, No Divas, just a good time and good content.


I was just made aware of /r/grrcon that was created by /u/quizbuk to track the CTF and other things relating to the conference.

submitted by b31tf4ce
[link] [17 comments]

April 18, 2014 02:43 PM


Linked lists and patterns python [migrated]

Trying to write a function that will iterate over the linked list, sum up all of the odd numbers and then display the sum. Here is what I have so far:

def main():
    array = eval(input("Give me an array of numbers: "))
    print(sumOdds(array))

def isOdd(x):
    return x % 2 != 0

def sumOdds(array):
    if (array == None):
        return 0
    elif (isOdd(head(array))):
        return head(array) + sumOdds(tail(array))
    else:
        return sumOdds(tail(array))

I can't get it to actually print the sum though. Can anybody help me out with that?

Here is the output of the program when I run it:

$ python3
Give me an array of numbers: [11, 5, 3, 51]
Traceback (most recent call last):
  File "", line 22, in <module>
  File "", line 10, in main
  File "", line 19, in sumOdds
    return head(array) + sumOdds(tail(array))
  File "", line 18, in sumOdds
    elif (isOdd(head(array))):
  File "/Users/~/cs150/practice3/friday/", line 34, in head
    return NodeValue(items)
  File "/Users/~/cs150/practice3/friday/", line 12, in NodeValue
    def NodeValue(n): return n[0]
  TypeError: 'int' object is not subscriptable
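The traceback's NodeValue helper suggests the course code represents a list as nested pairs rather than a flat Python list. Here is a minimal sketch of my own (with hypothetical head/tail helpers matching that style) showing the representation the recursion expects:

```python
# Hypothetical sketch of the nested-pair linked-list representation the
# helpers in the traceback suggest: a node is (value, rest), the end is None.
def head(node):
    return node[0]

def tail(node):
    return node[1]

def sum_odds(node):
    # sum the odd values in the linked list
    if node is None:
        return 0
    first = head(node)
    rest = sum_odds(tail(node))
    return first + rest if first % 2 != 0 else rest

# [11, 5, 3, 51] as nested pairs
lst = (11, (5, (3, (51, None))))
print(sum_odds(lst))  # all four values are odd: 11 + 5 + 3 + 51 = 70
```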

by Billy at April 18, 2014 02:41 PM


In Scala, how to test the type of an 'Any' object against a type with type parameter?

I am trying to get a type-safe way of converting the result of parsing a JSON string. I want to check whether a field is a Map[String, Any] or a plain string. My first attempt is

def test(x:Any) = {
    x match {
        case m:Map[String,Any] => ...

This causes the warning "non-variable type argument String in type pattern Map[String,Any] is unchecked since it is eliminated by erasure".

Looking through the documentation of TypeTag and ClassTag, I could not find a good way to accomplish that. The following code does not cause the warning, but I wonder why it works.

type StringMap = Map[String,Any]
def test(x:Any) = {
    x match {
        case m:StringMap => ...

by Zhi Han at April 18, 2014 02:39 PM


What can Idris not do by giving up Turing completeness?

I know that Idris has dependent types but isn't Turing-complete. What can it not do by giving up Turing completeness, and is this related to having dependent types?

I guess this is quite a specific question, but I don't know a huge amount about dependent types and related type systems.
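To make the trade-off concrete, here is a hedged sketch of my own (in Python, not Idris) of the kind of definition a totality checker must reject: general recursion with no structurally decreasing argument, where termination for all inputs is not provable.

```python
# General recursion with no decreasing argument: a total language like Idris
# (with totality checking on) would reject this definition, since proving it
# terminates for every n is an open problem (the Collatz conjecture) --
# even though it terminates on every input anyone has tried.
def collatz_steps(n):
    if n == 1:
        return 0
    nxt = n // 2 if n % 2 == 0 else 3 * n + 1
    return 1 + collatz_steps(nxt)

print(collatz_steps(6))  # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
```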

by MrBones at April 18, 2014 02:35 PM

Convex Hull algorithm - why it can't be computed using only comparisons

Say I want to compute a convex hull of given points on the plane. I would like to write an algorithm that only compares the points and doesn't do any arithmetic operations. Wikipedia states that:

The standard $\Omega(n \log n)$ lower bound for sorting is proven in the decision tree model of computing, in which only numerical comparisons but not arithmetic operations can be performed; however, in this model, convex hulls cannot be computed at all.

Why is it so? I can't find any justification for it anywhere. I know it to be intuitively true, but why is it a necessity?
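One way to see it (an illustrative example of my own, in Python): there exist inputs whose pairwise coordinate comparisons are all identical, yet whose convex hulls differ, so no decision tree of comparisons can distinguish them. Deciding hull membership needs the sign of a cross product, which is arithmetic:

```python
from itertools import combinations

def orient(p, q, r):
    # sign of the cross product (q - p) x (r - p): requires arithmetic
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def comparison_profile(points):
    # the outcome of every pairwise comparison between input coordinates
    vals = [c for p in points for c in p]
    return [(a < b, a == b) for a, b in combinations(vals, 2)]

A, B, D = (0, 0), (1, 2), (2, 0)
set1 = [A, B, (2, 3), D]   # B lies above segment A-(2,3): hull vertex
set2 = [A, B, (2, 5), D]   # B lies below segment A-(2,5): interior point

# comparisons alone cannot tell the two inputs apart ...
assert comparison_profile(set1) == comparison_profile(set2)
# ... yet B is on the hull of set1 and strictly inside the hull of set2
assert orient(A, (2, 3), B) > 0 and orient(A, (2, 5), B) < 0
```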

by Arek Krawczyk at April 18, 2014 02:32 PM


Notion of 'order type' for non-well ordered sets?

Given a well-ordered poset $P = (S,\leq)$, it is possible to define its order type as the supremum of the order types of its linear extensions; this is well-defined as shown in 'Well-partial orderings and hierarchies' by de Jongh & Parikh.

Suppose now that we have a poset $P$ that is non-well ordered. In general it is possible to embed 'arbitrarily complex' well-orders in $P$, e.g. if $P$ is $(\mathbb{R},\leq)$ then all countable ordinals can be embedded in $P$. Is it still possible to assign an order type $\lambda$ to $P$ such that every 'reasonable embedding' of a well-order into $P$ has order type $\leq \lambda$?

The most basic situation in which I'm interested is the poset of finite permutations ordered by pattern involvement. Intuitively it seems that the well-ordered posets obtainable in this way cannot be too complex; for instance, we can embed the plane trees ordered by topological minors, which have order type $\epsilon_0$ if I'm correct.

by Grid_y_Bill at April 18, 2014 02:28 PM

Dave Winer

WordPress-to-OPML source

As promised, I have released the source for the server that converts a WordPress blog into a single Fargo-editable outline. It's written in JavaScript and runs in node.js.

The format is OPML, which has many other uses.

It's provided under the MIT license.

BTW, you'll find a link to this server and all my other source releases in the GitHub menu at the top of every post on this blog.

April 18, 2014 02:27 PM


generic case class to json using writer

I am trying to convert this model to Json, but I always get the error "No apply function found matching unapply parameters".

I tried to implement two different writers to do this, but neither works.

Here is my model:

case class Page[T](
    var data: List[T],
    var previous: String,
    var next: String,
    var totalPageCount: Int)(implicit val tWrites: Writes[T])

object PageScala {

    // Both writers generate a "No apply function found matching unapply parameters" error
    implicit val pageWrites = Json.writes[Page[_]]

    implicit def pageWriter[T]: Writes[PageScala[T]] = new Writes[PageScala[T]] {
        def writes(o: PageScala[T]): JsValue = {
            implicit val tWrites = o.tWrites
            val writes = Json.writes[PageScala[T]]

Does anyone have a solution?

by Christophe at April 18, 2014 02:11 PM


Question about Kannan's theorem [migrated]

I was reading a paper of Buhrman and Homer, "Superpolynomial Circuits, Almost Sparse Oracles and the Exponential Hierarchy".

On the bottom of page 2 they remark that the results of Kannan imply that $NEXPTIME^{NP}$ does not have polynomial size circuits. I know that in the exponential time hierarchy, $NEXPTIME^{NP}$ is just $\Sigma_2EXP$, and I also know that Kannan's result is that $\forall c\mbox{ }\exists L\in\Sigma_2P$ such that $L \not\in Size(n^c)$. Of course, Kannan's theorem is NOT saying $\Sigma_2P \not\subset P/poly$ (in order for that to be the case we would need to show that $\exists L\in\Sigma_2P$ such that $\forall c$, $L \not\in Size(n^c)$). However, I don't see how Kannan's result implies that $NEXPTIME^{NP} \not\subset P/poly$.

by Lorraine at April 18, 2014 02:10 PM


How many CS professionals would benefit from mentoring/course/program about solving the most challenging business problems? (Validating this, I need your answers!)

I started out really technical in CS, but I have seen my value shift drastically from technical expertise toward helping people in organizations solve their challenging business problems using custom developed (or enhanced) software systems. Specifically, I mean business process problems such as “developing customer relationships,” “reporting sales,” and “managing customer orders.”

I wonder if there are others out there who might want to apply their technical skills more closely to people’s underlying problems, but just feel stuck and don't know how or where to get started.

If a mentoring/course/program opportunity (format TBD) existed to help CS professionals to better relate their work to the people who will use the system, would you be interested? (Yes=Up Vote No=Down Vote).

I botched my earlier explanation, so I’m providing some additional detail on potential subjects that could be included based on this feedback. If nothing comes of this, then at least this list of subjects could provide a basis for people to do their own exploration.

Stakeholders- know the people and organizations who will use and interact with the system you are developing, and how they will ultimately judge your deliverables

  • explore the people who will interact with the system, and the environments in which they operate

  • identify and understand stakeholder incentives, and how they impact the system behavior and performance

  • visualize stakeholder interactions, including unanticipated conflicts that may impact your work

  • interact with business process experts, and understand how the developed system will actually be used

  • gauge the business benefits that are important to your stakeholders, and identify the value that they will place on them

Technical features- the details of your work, and how they impact your stakeholders

  • select features and specifications that actually deliver value based on stakeholder benefits

  • simultaneously compare advantages and disadvantages of multiple features, including unexpected interactions

  • present combinations of features to your stakeholders, and clearly communicate the impact to them

  • discuss technical details with your stakeholders in ways that clearly communicate the business impact

Concepts and testing- the results of your work, and how it will be received by your stakeholders

  • facilitate the selection of different system concepts and prototypes based on stakeholder impact

  • anticipate business and technical risks before they become deliverable crises

  • design "real options" to accommodate unavoidable future uncertainties that might impact your client

  • design testing scenarios and plans based on your stakeholders' realistic business process and environment

  • negotiate testing criteria that provide measurable goals for predictable completion of your work

Others pointed out that academic options exist to serve this purpose, which is an excellent point. I have engineering and business degrees from Cornell and MIT, so I’m fairly aware of the established material and overlaps that exist.

Also, I know this might not be for everyone. That is ok. If possible, please let me know why you would or would not want it, or any other feedback. Thanks!

submitted by surfman49
[link] [24 comments]

April 18, 2014 01:40 PM


Help in NP-Hardness proof of a certain type of Class Cover problem

The Class Cover Problem is the problem of finding an optimal cover of a certain class (point set) with a particular shape only, i.e. finding the minimum number of polygons of a certain shape (for example, rectangles) required to cover the point set (S).

Well, covering by rectangles is proved to be NP-hard, and now I want to consider a different shape, say an L-shape, and find a cover using it.

So, can somebody help me prove that this problem is NP-hard too? I've worked on it a bit and tried reducing from a known NP-hard problem, but couldn't quite get it.

by srkaysh at April 18, 2014 01:36 PM


if technical analysis rules for predicting stock prices are the same for all cases, why should we learn neural networks? [on hold]

Is there an out-of-the-box neural network tool that has already learned all technical rules by being fed lots of stock trading data?

by Esmaeilzadeh at April 18, 2014 01:36 PM


Does a location aware Bloom Filter exist?

The standard Bloom filter does not admit false negatives, but allows false positives. These false positives are meant to be evenly distributed (I think; at least they don't appear to usefully cluster in my trials). What if I had a binary image (or anything else spatial)? Then I care if it mispredicts a 1 in an area filled with 0s, but I don't really care if it mispredicts a 1 in an area filled with 1s.

Does such a spatially aware sketch data structure exist?
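For reference, here is a minimal sketch of my own of the standard baseline being discussed (parameter choices arbitrary). A "location aware" variant would presumably bias where the k bit positions land so that false positives cluster in the all-ones regions:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: no false negatives, uniformly spread false positives."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # derive k bit positions from k independently salted hashes
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # true if every position is set; may be a false positive, never a false negative
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for word in ["foo", "bar", "baz"]:
    bf.add(word)
assert all(w in bf for w in ["foo", "bar", "baz"])  # no false negatives
```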

submitted by keije
[link] [7 comments]

April 18, 2014 01:23 PM




Scala deep type checking

We have a function that can return anything:

def func: AnyRef

And we need to check whether the return value is a

Tuple2[String, String] 


List[Tuple2[String, List[String]]] 


List[Tuple2[String, List[Int]]] 

or anything else.

What is the right way to do that?
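Note that on the JVM the type arguments are erased at runtime, so a plain pattern match cannot check them; a fully reliable runtime check has to walk the value itself. A hedged Python analogue of my own of such a "deep" check, just to illustrate the idea:

```python
# Deep runtime check by inspecting the value, since under erasure the
# static type-argument information is not available at runtime.
def is_pair_of_strings(x):
    # corresponds to Tuple2[String, String]
    return (isinstance(x, tuple) and len(x) == 2
            and all(isinstance(e, str) for e in x))

def is_list_of_string_to_strlist_pairs(x):
    # corresponds to List[Tuple2[String, List[String]]]
    return (isinstance(x, list)
            and all(isinstance(t, tuple) and len(t) == 2
                    and isinstance(t[0], str)
                    and isinstance(t[1], list)
                    and all(isinstance(e, str) for e in t[1])
                    for t in x))

assert is_pair_of_strings(("a", "b"))
assert not is_pair_of_strings(("a", 1))
assert is_list_of_string_to_strlist_pairs([("k", ["v1", "v2"])])
assert not is_list_of_string_to_strlist_pairs([("k", [1, 2])])
```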

by Lemon Tree at April 18, 2014 12:55 PM

Fred Wilson

Feature Friday: Comedy On SoundCloud

Where is the next Howard Stern going to emerge? I don’t think it will be terrestrial or satellite radio. I think it's more likely that he or she will emerge from a place like our portfolio company SoundCloud. There is a ton of comedy on SoundCloud and it's growing very fast. But discovery has been a problem.

In the most recent Android release, SoundCloud has introduced some very nice discovery features. These features also exist on the web and will be coming to iOS soon. Since the way we most likely want to listen to the next Howard Stern is by bluetoothing our phone to our car when we are driving to and from work, I will show you how to listen to comedy on SoundCloud using the Android app flow. It is very similar on the web.

First, you open up the app menu by tapping on the upper left of the app and get this:

soundcloud menu



Next you click on Explore to get this:

soundcloud genres


Then you select Comedy to get this:

soundcloud comedy


Each of these “cards” represents a potential new Howard Stern show. You select one and start listening. If you find one you really like, you can follow it in SoundCloud and get the next show right in your feed.

If you are driving to and from work and are looking for something good to listen to, I’d strongly recommend checking out some of these comedy shows on SoundCloud. They are great.

by Fred Wilson at April 18, 2014 12:45 PM


Are purely functional data structures always lock-free?

I've seen this claimed in several places, including on SO. I get the point that locks are not needed to modify the data, but you end up with multiple versions of it after concurrent modifications. That doesn't seem very useful in practice. I've tried to describe this with a simple scenario below:

Let's say we have 2 threads A and B. They are both modifying a purely functional dictionary D. They don't need locks at this point because the data is immutable so they output new dictionaries DA and DB. Now my question is how do you reconcile DA and DB so that later operations can see a single view of the data?

EDIT: The answers suggest using a merge function over DA and DB. I don't see how that solves the problem, since there could be another thread C running concurrently with the merge operation. The claim is that purely functional data structures are lock-free, but using a merge function sounds more like eventual consistency, which is a different thing.
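The reconciliation pattern usually paired with persistent structures (a sketch of my own, not from the linked answers) keeps one shared root reference and installs each new version with a compare-and-swap, retrying on conflict, so no thread blocks while holding a lock during the update itself:

```python
import threading

class AtomicRef:
    """Simulated CAS cell; a real lock-free runtime uses a hardware CAS instruction."""
    def __init__(self, value):
        self._value = value
        self._guard = threading.Lock()   # stands in for the atomic primitive only

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        with self._guard:
            if self._value is expected:
                self._value = new
                return True
            return False

root = AtomicRef({})   # shared view of a "persistent" dictionary

def assoc(ref, key, val):
    while True:                      # retry loop instead of locking the update
        old = ref.get()
        new = {**old, key: val}      # non-destructive: the old version is untouched
        if ref.compare_and_set(old, new):
            return

assoc(root, "a", 1)
assoc(root, "b", 2)
assert root.get() == {"a": 1, "b": 2}
```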

by Daniel Velkov at April 18, 2014 12:44 PM

Planet Theory

Lecture 12 -- Privacy Yields an Anti-Folk Theorem in Repeated Games

Last week, Kobbi Nissim gave us an excellent guest lecture on differential privacy and machine learning. The semester has gone by fast -- this week is our last lecture in the privacy and mechanism design class. (But stop by next week to hear the students present their research projects!)

Today we'll talk about infinitely repeated games. In an infinitely repeated game, n players repeatedly, over an infinite number of stages, play actions and obtain payoffs based on some commonly known stage game. Since the game is infinitely repeated, in order to make sense of players' total payoffs we employ a discount factor delta that specifies how much less valuable a dollar is tomorrow compared to a dollar today (delta is some number in [0, 1)). In games of perfect monitoring, players perfectly observe what actions each of their opponents have played in past rounds, but in large n-player games it is much more natural to think about games of imperfect monitoring, in which agents see only some noisy signal of what their opponents have played.
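For concreteness, the (normalized) discounted payoff usually meant here is, writing $u_i(a^t)$ for player $i$'s stage payoff in period $t$:

```latex
U_i \;=\; (1-\delta)\sum_{t=0}^{\infty} \delta^{\,t}\, u_i(a^t),
\qquad \delta \in [0,1)
```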

For example, one natural signal players might observe in an anonymous game is a noisy histogram estimating what fraction of the population has played each type of action. (This is the kind of signal you might get if you see a random subsample of what people play -- for example, you have an estimate of how many people drove on each road on the way to work today by looking at traffic reports). Alternately, there may be some low dimensional signal (like the market price of some good) that everyone observes that is computed as a randomized function of everyone's actions today (e.g. how much of the good each person produced).

A common theme in repeated games of all sorts is the folk theorem. Informally, such theorems state that in repeated games we should expect a huge multiplicity of equilibria, well beyond the equilibria we would see in the corresponding one-shot stage game. This is because players observe each other's past behavior, and so can threaten each other to behave in prescribed ways or else face punishment. Whether or not a folk theorem is a positive result or a negative result depends on whether you want to design behavior, or predict behavior. If you are a mechanism designer, a folk theorem might be good news -- you can try and encourage equilibrium behavior that has higher welfare than any equilibrium of the stage game. However, if you want to predict behavior, it is bad news -- there are now generically a huge multiplicity of very different equilibria, and some of them have much worse welfare than any equilibrium of the stage game.

In this lecture (following a paper joint with Mallesh Pai and Jon Ullman) we argue that:

  1. In large games, many natural signaling structures produce signal distributions that are differentially private in the actions of the players, where the privacy parameters tends to 0 as the size of the game gets large, and
  2. In any such game, for any discount factor delta, as the size of the game gets large, the set of equilibria of the repeated game collapse to the set of equilibria of the stage game. In other words, there are no "folk theorem equilibria" -- only the equilibria that already existed in the one shot game. 
This could be interpreted in a couple of ways. On the one hand, this means that in large games, it might be harder to sustain cooperation (which is a negative result). On the other hand, since it shrinks the set of equilibria, it means that adding noise to the signaling structure in a large game generically improves the price of anarchy over equilibria of the repeated game, which is a positive result. 

by Aaron Roth ( at April 18, 2014 12:41 PM


Control flow graphs - Tree decomposition

Control flow graphs

Considering the above terminologies, drawing control flow graphs for any program is very simple. For example:

While A
    if B
        do ..
    else
        do ..
end while

For the above example, when doing the decomposition, I can say it is

D2 (D1)

i.e. a while, and then inside the while an if-then-else.

Considering the same situation, how can you represent

CONTINUE and BREAK statements?

Even for the FOR statement there is no defined terminology like there is for while and if-then-else; the FOR statement falls under while.

My prof says that in theory there is nothing about break and continue statements, and I couldn't find anything online either.

For example :

# include <stdio.h>
int main(){
   float num, average, sum = 0.0;
   int i, n;
   printf("Maximum no. of inputs\n");
   scanf("%d", &n);
   for(i = 1; i <= n; ++i){
       printf("Enter n%d: ", i);
       scanf("%f", &num);
       if(num < 0.0) break;       //for loop breaks if num<0.0
       sum += num;
   }
   average = sum / (i - 1);
   printf("Average = %.2f", average);
   return 0;
}

Control flow graph for this is as below; the nodes have line numbers written on them. (Sorry, the image is sideways.)

I decomposed this as :


* P1 represents the set of statements outside loops

I'm not sure if this is correct. My professor says to use something like D22 for break, i.e. create a new term from the first image above.

My main question here is the decomposition. I know that I drew the CFG correctly, but is the decomposition correct according to the first image? The break kind of creates a while, as you can see in the CFG, but I'm not sure if it has to be considered as a while, and I guess we cannot, as per my professor.

I am working on this and wanted to know if anyone has come across something for break and continue statements in the decomposition of graphs; if so, please let me know.


PS: Please let me know if I am unclear or if any more info is needed. I can probably write down an example and upload a picture.

by TheUknown at April 18, 2014 12:33 PM


How to access and send message to ZeroMQ from Tornado handler?

How do I access and send a message to ZeroMQ from a Tornado handler? I have

import zmq

try:
    context = zmq.Context(1)
    # Socket facing clients
    frontend = context.socket(zmq.XREP)
    # Socket facing services
    backend = context.socket(zmq.XREQ)

    zmq.device(zmq.QUEUE, frontend, backend)
except Exception, e:
    print e
    print "bringing down zmq device"

as a standalone program which communicates with others, but how do I put a message on the same queue from a handler? Do I need to create the context every time or not?

by Damir at April 18, 2014 12:24 PM


Faithful functors vs forgetful functors: exact category-theoretic defs?

In category theory, a functor between two categories $C,D$ is a map $F$ that assigns to each object (resp. morphism) $x$ of $C$ a corresponding object (resp. morphism) $F(x)$ of $D$ by respecting the incidence relations.

For each pair of objects $x,y$ of $C$, we may then define a map $F_{x,y}$ that takes any morphism $m : x \rightarrow y$ to a morphism $F(m) : F(x) \rightarrow F(y)$. I understand that $F$ is called faithful if every such mapping $F_{x,y}$ is injective, which means intuitively that the relational structure of the category is preserved, although the objects may not be.

There is a related notion of a forgetful functor for which I couldn't find a precise definition; is anyone willing to help? I mean, is it just the opposite of faithful, or is it the combination of unfaithfulness with some other implicit property?

by Super8 at April 18, 2014 12:22 PM


TV-Empfehlung: Der Beckmann über die digitale Welt. ...

TV recommendation: Beckmann on the digital world. At the end of the day it is of course still a TV talk show, but by talk-show standards it is quite worth watching. To my surprise, none of the guests came across as uninformed or merely recited stock phrases.

Update: A few thoughts on the show. Gabriel showed more backbone on behalf of the SPD than I can remember ever having seen. Not only did he threaten Google with a breakup, he also made Eric Schmidt's offer to talk public instead of simply accepting it, as is otherwise customary. I took that as a massive middle finger in Google's direction. My impression is that Gabriel now wants to position himself, using Google, as an internet-freedom-and-privacy politician.

My other thought is that the token internet entrepreneur came across as remarkably eloquent and bright. That is a clear departure from the usual talk-show fare, where you have a stammering "affected party" who only sits there so the audience can be shown someone affected. I noticed that he pointed out that he strictly separates business and private matters. That is a broad hint whose significance apparently nobody in the round picked up on directly. It means that the man himself is unhappy about having to collect data. He knows that the customers don't want it, that it is immoral, and that he is doing something reprehensible. He does it because he believes he would otherwise not be competitive. The important lesson here: no resistance to attempts at regulation is to be expected from him and his colleagues. Only token resistance, to calm the investors. In fact, none of them would have a problem with stopping the profiling if it were banned starting tomorrow. The only ones who would genuinely oppose it are companies that see themselves as supranational, like Google. Companies that are currently using profiling to push other, established companies out of the market.

In my view, Google too would no longer have a problem giving up profiling in a few years, once they have completely taken over the markets for advertising, tracking and insurance. The markets in which their profiling expertise gives them an edge. But until then they still need it.

Update: One more thought. Juli Zeh points out in the show that in Europe we will soon have the choice between Martin Schulz and Jean-Claude Juncker, and that everybody should please mark their ballot accordingly. I never thought I would say anything positive about Social Democrats again, but Martin Schulz is so obviously the better choice that I would recommend applying a bit of pressure among friends and acquaintances as well. We are all called upon to marginalize the conservatives. With them it is clear that we will get a new round of data retention.

April 18, 2014 12:01 PM


Functional dependencies with the same key?

Let's consider a table with

carID | hireDate | manufactory | model | custID | custName | outletNo | outletLoc

I want to evaluate all the functional dependencies in order to bring the table into first, second and then third normal form.

  • Functional dependencies

    carID,hireDate -> custID
  • Partial dependencies

    carID->manufactory, model, outletNo
  • Transitive dependencies


Since a car is only ever in one outlet, I have this among the partial dependencies:

carID->manufactory, model, outletNo

However, this leads to insertion anomalies (imagine adding a car with no outlet), so shouldn't it be like this?

carID->manufactory, model

But isn't this still a normalisation anomaly?

by graphtheory92 at April 18, 2014 11:55 AM


Scheme - Help Writing A Function

I'm trying to write a function in Scheme that takes two strings as input and returns a list of all optimal pairs of strings. In order to do this, I know that I need to make use of the following functions that I have already written. The functions already written will obviously need to be used as helper functions for the function that I'm trying to write.

1. alignment-score-tail

This function takes two strings and scores each character pair according to a scoring criterion, accumulating the result. If two characters are equal, a score of 2 is obtained; if two characters are not equal, a score of -1 is obtained; and finally, if one character is an underscore and the other is something else, a score of -2 is obtained.

Here is example input/output:

> (alignment-score-tail "x" "x")
2

> (alignment-score-tail "x" "y")
-1

> (alignment-score-tail "x" "_")
-2

> (alignment-score-tail "Hello" "__low")
-4

2. change-string-pairs

This function takes two chars (a and b, say) and a list of pairs of strings as input, and returns a modified list of pairs of strings in which, for each string pair in the list, a has been prepended to the first string and b to the second.

Here is example input/output:

> (change-string-pairs "a" "b" '(("one" "two")("three" "four")("five" "six")))
(("aone" "btwo") ("athree" "bfour") ("afive" "bsix"))

3. get-best-pairs

This function takes both a scoring function (scoring function in this case will be alignment-score-tail, which is described above) and a list of pairs of strings as input and then returns a modified list of pairs of strings. The returned list will contain all the optimal string pairs from the input, scored according to the input function.

Here is example input/output:

> (get-best-pairs alignment-score-tail '(("hello" "b_low")("hello_" "b_l_ow")("hello" "_blow")("hello" "blow")("h_e_llo" "bl_o__w")))
(("hello" "b_low") ("hello_" "b_l_ow") ("hello" "_blow"))

Given all the functions described above, which I have already written, the function that I'm trying to write should behave as follows:


> (get-all-best-pairs "hello" "blow")
(("hello" "b_low") 
 ("hello_" "b_l_ow") 
 ("hello_" "b__low") 
 ("hello" "_blow") 
 ("hello_" "_bl_ow") 
 ("hello_" "_b_low") 
 ("hello_" "__blow"))

It would be really great if I could see how this can be done. In the functions I have written above, I have made use of some built-in Scheme functions like map, filter, append and apply.

I am also aware that the algorithm is extremely inefficient (exponential complexity); that is not a concern at this time.

by Curiosity at April 18, 2014 11:35 AM

Scala deep type checking

My purpose is to do deep type checking, that is, to check all type arguments. For example, here I am using TypeTag:

import scala.reflect.runtime.universe._
def check[T](a: T)(implicit tag: TypeTag[T]) = tag.tpe <:< typeTag[(String, List[Int])].tpe

And it seems to work fine:

scala> check(("", List("")))
res0: Boolean = false

scala> check(("", List(1)))
res1: Boolean = true

But I am not sure if it is the right way.

I also think the implementation of equals in TypeTag seems strange:

 override def equals(x: Any) = x.isInstanceOf[TypeTag[_]] && this.mirror == x.asInstanceOf[TypeTag[_]].mirror && this.tpe == x.asInstanceOf[TypeTag[_]].tpe

this.tpe == x.asInstanceOf[TypeTag[_]].tpe always returns false when I use it in my check function.

I am also interested in a common solution for doing different type checks with one function. But I have no idea how to do that.
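For what it's worth, here is a hedged sketch of one reusable variant (checkAs is my own illustrative name, not a standard API): pass the expected type as a second type parameter, so a single function covers many different deep checks.

```scala
import scala.reflect.runtime.universe._

// the expected type is a type parameter instead of being hard-coded,
// so one function performs many different deep subtype checks
def checkAs[Expected: TypeTag, A: TypeTag](a: A): Boolean =
  typeOf[A] <:< typeOf[Expected]

checkAs[(String, List[Int]), (String, List[Int])](("", List(1)))      // true
checkAs[(String, List[Int]), (String, List[String])](("", List("")))  // false
```

Since Scala 2 cannot partially infer type-parameter lists, both type arguments are given explicitly at the call site.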

by Lemon Tree at April 18, 2014 11:21 AM

Planet Clojure

Optimal left to right pattern matching Automata in literate clojure

Optimal left to right pattern matching Automata
Hi, I am in the process of rewriting expresso's rule engine. Currently, rules in expresso can be very expressive, but the rule engine was written in the first two weeks of my GSoC project time on top of core.logic, so it has many efficiency issues. I did some research on how the rule engines of other term rewriting systems are built (such as Maude, ELAN, Stratego ...). Following a few references, I came to this paper, which presents an algorithm to match an expression against multiple patterns simultaneously, without backtracking, in one pass over the expression. That is really cool and the perfect basis for a good rule engine/compiler. I implemented the algorithm from the paper and also built a rule compiler that unrolls the constructed automaton into an efficient Clojure function. It is a literate program; you can find it here. The rest of this post is the generated HTML export of the literate org file.

1 Introduction

This is a Clojure implementation of the algorithm from this paper. The problem it addresses is matching a term against multiple patterns simultaneously, without backtracking, scanning the expression only once from left to right. The patterns are assumed to be linear, that is, no two variables in a pattern have the same name. Testing for equality has to be done as an additional check after the linear pattern has matched.
To accomplish this, a directed acyclic graph representing a deterministic automaton is created from the patterns, with transitions labeled by the next symbol read during a left-to-right scan through the pattern, and the currently matching patterns as nodes. The dag is kept minimal, that is, no two states in the dag produce the same matching sub-tree.
I extended the algorithm in the paper to also work when there is a wildcard on a function symbol, as in the pattern '(? a b), and to handle functions with multiple arities. This adds a few extra cases to the interpreter and the compiler, but does not slow down the matching process when it isn't needed.
Interpreting it works as expected:
- scan through the input expression
- for each symbol, follow the labeled transition if it exists
- pick the default route, if one exists, in case that fails
- fail otherwise
- repeat until a failure state or the end of the expression is reached
The dag can also be compiled to an optimized Clojure function resembling the decision tree that the dag represents. Basically, the function consists of a bunch of (case <location-in-expression> <symbol1> <forward-location-and-check-next-input> ... <default> <go-through-default-route-if-possible>) forms, thus eliminating the need to search through the dag at matching time.

1.1 Implementation

(ns optimal-left-to-right-pattern-matching-automata.core
(:require [clojure.set :as set]
[clojure.walk :as walk]
[ :as zip]))
We need a (meta-)symbol for a default transition. It will be called omega from now on:
(def omega '?)

1.1.1 Representing patterns

Because we are concerned with scanning expressions from left to right, the matching positions of the patterns can be totally ordered - by how right they appear in the printed representation - and put in a single list. Function symbols are represented as [<function-symbol> <number-of-arguments>], so that the flat representation retains all information about the original structure of the pattern. For example, the pattern '(f (g a b) a a b) can be represented as '([f 4] [g 2] a b a a b). In this representation, a pattern is just a list of transition labels that the automaton must perform in order to match an expression against the pattern. During the matching, there will always be a current state which is all patterns with the same matching prefix, a current position where the next symbol will be read, and a suffix to be matched for the next symbols read. This is the definition of a matching-item.
(defn matching-item
"A matching item is a triple r:a*b where ab is a term and r is a rule label.
The label identifies the origin of the term ab and hence, in a term rewriting system, the rewrite rule which has to be applied when ab is matched. * is called
the matching dot, a the prefix and b the suffix. The first symbol of b is the matching symbol. The position of the matching dot is the matching position."
[r a b]
[r a b])

(defn matching-symbol [matching-item]
(let [[r a b] matching-item]
(first b)))

(def infinity (Double/MAX_VALUE))

(defn final? [matching-item]
(let [[r a b] matching-item]
(empty? b)))

(defn matching-position [matching-item]
(if (final? matching-item)
infinity
(let [[r a b] matching-item]
(inc (count a)))))
(defn initial-matching-item [label pattern]
[label '() pattern])
The current state of the automaton is then a set of matching items which share the same prefix.
(defn matching-set? [matching-items]
(let [[r a b] (first matching-items)]
(every? #(let [[r2 a2 b2] %] (= a a2)) (rest matching-items))))

(defn initial-matching-set? [matching-items]
(and (matching-set? matching-items) (= '() (second (first matching-items)))))

(defn final-matching-set? [matching-items]
(and (matching-set? matching-items) (= '() (nth (first matching-items) 2))))

1.1.2 The Transition function for the automaton

After we know what the states and the transitions of the automaton will be, we can start looking at the definition of the transition function delta. For more explanation, see the paper itself. Basically, from the current state - the current matching set - it returns as the next node the set of matching items which can be forwarded by the symbol s - that is what the accept function does. It also avoids backtracking by adding more states when there is an ambiguity, in the form that one pattern has a default next transition and another has a transition that goes a level deeper with a function symbol. If the function symbol transition were followed, it could fail, and one would have to backtrack and go through the omega transition. Therefore, for each such situation a new pattern is added to the matching set, consisting of the omega rule but with the omega replaced by the next function symbol and a number of omegas matching the function's arity. It is also important to do this closing over the current matching set at the very beginning, to handle the case of a default omega pattern. The paper fails to mention that.
(defn forward-matching-position [matching-item]
(let [[r a b] matching-item]
[r (concat a [(first b)]) (rest b)]))

(defn functions [matching-set]
(into #{} (filter #(or (and (symbol? %) (not= omega %))
(and (sequential? %)
(symbol? (first %))
(number? (second %)))))
(map matching-symbol matching-set))))

(defn arity [function-symbol]
(or (and (sequential? function-symbol) (second function-symbol)) 0))

(defn accept [matching-items s]
(map forward-matching-position
(filter #(= (matching-symbol %) s) matching-items)))

(defn close [matching-items]
(let [F (functions matching-items)]
(set/union matching-items
(for [matching-item matching-items
function-symbol F
:let [arityf (arity function-symbol)]
:when (and (= omega (matching-symbol matching-item)))]
(let [[r a b] matching-item]
[r a (concat [function-symbol] (repeat arityf omega)
(rest b))])))))

(defn delta [matching-items s]
(close (accept matching-items s)))

1.1.3 Creating the DAG

  1. Graph implementation
    Here is a very simple implementation of a functional graph data structure
    ;;quick and dirty functional graph implementation
    (def empty-graph {})

(defn add-node [g n]
    (if (g n)
    g
    (assoc g n {:next #{} :prev #{}})))

    (defn add-edge [g n1 n2 l]
    (-> g
    (add-node n1)
    (add-node n2)
    (update-in [n1 :next] conj [n2 l])
    (update-in [n2 :prev] conj [n1 l])))

    (defn remove-edge [g n1 n2 l]
    (-> g
    (add-node n1)
    (add-node n2)
    (update-in [n1 :next] disj [n2 l])
    (update-in [n2 :prev] disj [n1 l])))

(defn remove-node [g n]
    (if-let [{:keys [next prev]} (g n)]
    (let [g (reduce (fn [g* [n* l*]] (remove-edge g* n* n l*)) g prev)
          g (reduce (fn [g* [n* l*]] (remove-edge g* n n* l*)) g next)]
    (dissoc g n))
    g))
  2. Recognizing equivalent states
    To make the created automaton minimal, equivalent states have to be recognized during the construction phase. Two states are equivalent, if for each item in set1 there exists an equivalent item in set2. Two matching items are equivalent, if they have the same rule label and the same suffix.
    (defn equivalent-matching-items? [matching-item1 matching-item2]
    (let [[r1 a1 b1] matching-item1 [r2 a2 b2] matching-item2]
    (and (= r1 r2) (= b1 b2))))

(defn extract-first-by
    "returns [extracted rest-of-collection] or false"
    [f coll]
    (loop [[c & cs] coll rest-coll []]
    (if c
    (if (f c)
    [c (concat rest-coll cs)]
    (recur cs (conj rest-coll c)))
    false)))

(defn equivalent-matching-sets? [matching-set1 matching-set2]
    (loop [[mit & mits] matching-set1 matching-set2 matching-set2]
    (if mit
    (if-let [[mit2 mits2] (extract-first-by #(equivalent-matching-items? mit %)
    matching-set2)]
    (recur mits mits2)
    false)
    (empty? matching-set2))))
  3. Constructing the DAG
For a detailed description of this algorithm, see the paper. Basically, we start with the initial matching set and create new states for all possible transitions, adding the nodes and the edges to the graph, or only the transition if an equivalent state already exists in the graph. Then we sort the newly created states according to their matching position, so that states with only a few already-matched items are handled first. The creation ends when the list of states has been traversed completely.
    (defn failure? [state]
    (or (= '() state) (nil? state)))

    (defn get-next-node [g n l]
    (some #(and (= (second %) l) (first %)) (get-in g [n :next])))

(defn search-equivalent-node [graph node]
    (first (for [[n v] graph
    :when (equivalent-matching-sets? node n)]
    n)))

    (defn insert-according-to-matching-position [nodes-to-visit new-matching-set]
    ;;nodes-to-visit has to be sorted according to matching-position
    ;;all matching positions in a matching set are the same
    (let [nmp (matching-position (first new-matching-set))]
    (loop [[n & ns :as nodes-left] nodes-to-visit new-nodes-to-visit []]
    (if n
    (if (<= (matching-position (first n)) nmp)
    (recur ns (conj new-nodes-to-visit n))
    (vec (concat new-nodes-to-visit [new-matching-set] nodes-left)))
    (conj nodes-to-visit new-matching-set)))))

    ;;problem here? there used to be only one omega, now there are several
    (defn create-new-states [pos nodes-to-visit graph]
    (let [current-state (nth nodes-to-visit pos)
    F (functions current-state)]
    (loop [[s & ss] (concat F [omega]) nodes-to-visit nodes-to-visit graph graph]
    (if s
    ;;work to do
    (let [new-matching-set (delta current-state s)]
    ;;check if there is already an equivalent matching-set in the graph
    (if-let [eq-node (search-equivalent-node graph new-matching-set)]
    (recur ss nodes-to-visit (add-edge graph current-state eq-node s))
    (recur ss (insert-according-to-matching-position
    nodes-to-visit new-matching-set)
    (add-edge graph current-state new-matching-set s))))
    ;;all symbols consumed, so return the new state
    [graph nodes-to-visit]))))

    (defn create-dag [initial-matching-set]
    (loop [graph empty-graph nodes-to-visit [initial-matching-set] pos 0]
    (if (= (count nodes-to-visit) pos)
    ;;all nodes visited, so return graph
    (remove-node graph '())
    (let [[new-graph new-nodes-to-visit]
    (create-new-states pos nodes-to-visit graph)]
    (recur new-graph new-nodes-to-visit (inc pos))))))

1.1.4 Interpreting the DAG

With the minimal dag constructed as described in the paper, we can now implement how to interpret that dag to match an expression against multiple patterns. To do this, we traverse the expression from left to right using Clojure zippers. We recursively check for the next transition, follow it and move the zipper forward accordingly, and fail if no transition is possible. If we go through a wildcard, we add the current value of the zipper location to the bindings. ;;TODO may miss some bindings in rules created by close
(defn consume-next [g current-state symbol]
(let [next-state (get-next-node g current-state symbol)]
(if (failure? next-state)
;;there was no link, so go through omega link
[(get-next-node g current-state omega) [symbol]]
[next-state []])))

(defn consume-next-level-down [g current-state [symbol count]]
(let [next-state (get-next-node g current-state [symbol count])]
(if (failure? next-state)
;;there was no link, so go through omega link
[(get-next-node g current-state [omega count]) [symbol]]
[next-state []])))

(defn- next-without-down [loc]
(if (= :end (loc 1))
loc
(or (zip/right loc)
(loop [p loc]
(if (zip/up p)
(or (zip/right (zip/up p)) (recur (zip/up p)))
[(zip/node p) :end])))))

(defn match-expression [g patterns expression]
(loop [loc (zip/seq-zip expression) node patterns bindings []]
(if (or (failure? node) (zip/end? loc))
[node bindings]
(if (zip/branch? loc)
;;ok try if head symbol matches
;;we are using preorder throughout matching
(let [children-count (dec (count (zip/children loc)))
head-loc (zip/next loc)
[next-node add-bindings]
(consume-next-level-down g node [(first head-loc) children-count])]
(if (failure? next-node)
;;head got no match so we have to stay at the original level and try
;;to match there for a value or omega
(let [[next-node add-bindings] (consume-next g node (first loc))]
(recur (next-without-down loc) next-node
(concat bindings add-bindings)))
;;head location got a match so we go on on this level
(recur (zip/next head-loc) next-node
(concat bindings add-bindings))))
;;we have no possibility to go down a level deeper so we can just
;;consume directly
(let [[next-node add-bindings] (consume-next g node (first loc))]
(recur (zip/next loc) next-node
(concat bindings add-bindings)))))))
  1. Testing
    Here are a few sample calls and tests:
    (use 'clojure.test)
    (let [initial-matching-set (close [(initial-matching-item 1 '([? 2] a b))
    (initial-matching-item 2 '([? 1] a))
    (initial-matching-item 3 '(?))])
    dag (create-dag initial-matching-set)]
    (is (= '[([3 (?) ()]) (1)] (match-expression dag initial-matching-set 1)))
    (is (= '[([3 ([? 1] a) ()] [2 ([? 1] a) ()]) (+)]
    (match-expression dag initial-matching-set '(+ a))))
    (is (= '[([3 ([? 2] a b) ()] [1 ([? 2] a b) ()]) (+)]
    (match-expression dag initial-matching-set '(+ a b))))
    (is (= '[([3 (?) ()]) ((+ a b c))]
    (match-expression dag initial-matching-set '(+ a b c)))))

1.1.5 Compiling the DAG to a fast clojure function

The expression matching can be taken a level further, to the point that the dag can be compiled to a fast clojure function. The resulting clojure function will look like this:
(fn [expression]
(let [loc (zip/seq-zip expression)]
;;now code for the single transitions
;;if there are possible transitions in the dag that lead one
;;level down - if not, this part is replaced by false
;;and the next branch of the or is taken
(and (zip/branch? ~'loc) ;;fail if we are not in a branch
(let [head-loc (zip/next loc)]
(case (first head-loc) ;;fast dispatch on the function symbol
<function-symbol> (and (check-if-argument-count-matches)
#_ (...)
;;default case
<code-for-wildcard-transition or nil if no wildcard>)))
;;if there is no matching transition for the current head symbol
;;try matching the whole current subtree
(case (first loc)
<variable-or-constant> <code-for-next-transition>
#_ (...)
<code-for-wildcard-transition or nil if there is no wildcard>))))
In the end-nodes of the decision tree, the code returns either nil for a failure node, or sorts the applicable rules by priority (currently just their label, but one could introduce the rule that more specific rules come first) and, for each, defines the bindings, checks the conditions and returns their result.
Therefore, we now extend the notion of a pattern to the notion of a rule. Currently this is really low level, and the rule engine on top of this should take a more human-readable form.
A rule has the form [<label> <pattern> <conditions> <result> <wildcard-positions>]. label and pattern are the same as before, conditions is a list of expressions to evaluate after a successful match, result is the rhs of the rule, and wildcard-positions maps the wildcards in the pattern to positions in the expression.
With this the compile-rules function can be defined
(defn get-in-expression [expression key]
(loop [loc (zip/seq-zip expression) [k & ks] key]
(if k
(let [new-loc (loop [k k loc (zip/down loc)]
(if (> k 0)
(recur (dec k) (zip/right loc))
loc))]
(recur new-loc ks))
(first loc))))

(defn compile-step [g current-state rule-map]
(let [possible-moves (doall (map last (:next (get g current-state))))
head-moves (doall (filter sequential? possible-moves))
current-level-moves (doall (remove sequential? possible-moves))]
(if (empty? possible-moves)
`(and (zip/end? ~'loc)
;;current-state was successfully matched. Now get the results for the
;;matched rules in current-state
~@(for [[label & rest] (sort-by first (filter final? current-state))
:let [[conditions result omga-positions]
(get rule-map label)]]
`(let [~@(for [[name pos] omga-positions
[name `(get-in-expression ~'expression ~pos)]]
(and ~@(concat conditions [result]))))))
`(or ~(if (empty? head-moves)
;;have to test going a level deeper
`(and (zip/branch? ~'loc)
(let [~'head-loc (zip/next ~'loc)]
(case (first ~'head-loc)
;;now all next steps have to be written down in a
;;case - the right hand side will be a recursive
;;call to create the code at the next level
;;the default of case is either nil or the level
;;from following a [? <number>] label in the graph
(for [[s c] head-moves :when (not= omega s)
[s `(and
(= (dec (count (zip/children ~'loc))) ~c)
(let [~'loc (zip/next ~'head-loc)]
g current-state [s c])
[(let [omega-downs (filter #(= (first %) omega)
`(case (dec (count (zip/children ~'loc)))
(for [[omga c] omega-downs
`(let [~'loc (zip/next ~'head-loc)]
g current-state[omega c])
;;no further defaulting possible - fail
(case (first ~'loc)
(for [symbol current-level-moves :when (not= omega symbol)
[symbol `(let [~'loc (next-without-down ~'loc)]
(get-next-node g current-state symbol)
[(if (some #{omega} current-level-moves)
;;we have a default case to fall back to
`(let [~'loc (next-without-down ~'loc)]
(get-next-node g current-state omega)

(defn compile-rules [rules]
(let [res (for [[label pattern conditions result omga-positions] rules]
[(initial-matching-item label pattern)
[label [conditions result omga-positions]]])
initial-matching-set (close (map first res))
rule-map (into {} (map second res))
dag (create-dag initial-matching-set)]
`(fn [~'expression]
(let [~'loc (zip/seq-zip ~'expression)]
~(compile-step dag initial-matching-set rule-map)))))
  1. Tests with example rules
    Here are two example rules: (f a a ?a a) => ?a (f (g a ?b) a ?b a) => ?b Encoded in the current low-level representation they become
    [[1 '([f 4] a a ? a) [] '?a '{?a [3]}]
    [2 '([f 4] [g 2] a ? a ? a) '[(= ?a ?b)] '?b '{?b [1 2] ?a [3]}]]
    Here are the corresponding tests:
    (let [rules
    [[1 '([f 4] a a ? a) [] '?a '{?a [3]}]
    [2 '([f 4] [g 2] a ? a ? a) '[(= ?a ?b)] '?b '{?b [1 2] ?a [3]}]]
    f (eval (compile-rules rules))]
    (is (= 'c (f '(f (g a c) a c a))))
    (is (not (f '(f (g a b) a c a))))
    (is (= 'a (f '(f a a a a))))
    (is (not (f '(f a a a b)))))
  2. Example code
    The compiled code for the two rules above is a nested decision tree of case dispatches on (first loc) and on the child count of each subexpression; the macro-expanded output did not survive the HTML export intact, so it is not reproduced here.
Author: Maik Schünemann
Created: 2014-04-18 Fri 12:51
Emacs 24.2.1 (Org mode 8.2.5g)

by Maik Schünemann at April 18, 2014 11:07 AM


PTAS for Multidimensional Knapsack for unfixed dimension

There are several PTAS for Multidimensional Knapsack with running time O(n^{d/e}), where d is the dimension and 0 < e < 1. These are PTAS only when d is assumed to be fixed.

Are there PTAS for Multidimensional Knapsack for unfixed dimension?

by Thomas at April 18, 2014 11:01 AM


Studying Skiena. War Story: What’s Past is Prolog

I am reading The Algorithm Design Manual, 2nd Edition. I am at the "What's Past is Prolog" war story, which is available on the web here.

I do not follow this statement:

Since the rules were ordered, each node in the subtree must represent the root of a run of consecutive rules, so there were only ${{n}\choose{2}}$ possible nodes to choose from for this tree...

and this one neither:

The rules in each run must be consecutive, so there are only ${{n}\choose{2}}$ possible runs to worry about.

My question: how does the fact that the rules are consecutive lead to the inference about the number of nodes or runs (which is ${{n}\choose{2}}$)?

It seems I'm missing some intermediate reasoning steps.
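For what it's worth, the counting step behind that claim can be made explicit (my own reconstruction, not text from the book): a run of consecutive rules is determined entirely by where it starts and where it ends, so counting candidate nodes reduces to counting boundary pairs.

```latex
% a run of consecutive rules is fixed by its two endpoints i \le j,
% so the number of runs is the number of such pairs:
\#\{\text{runs}\} \;=\; \#\{(i,j) : 1 \le i \le j \le n\}
\;=\; \binom{n}{2} + n \;=\; O(n^2)
```

Skiena's ${{n}\choose{2}}$ is this count up to the lower-order diagonal terms $i = j$.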

by Nik at April 18, 2014 10:49 AM


Conflicting cross-version suffixes in: org.scalamacros:quasiquotes

I am trying to use scala-pickling in one of my projects. I tried to mimic the build file of macroid, which seems to use pickling too, but I keep getting this error on sbt test:

[error] Modules were resolved with conflicting cross-version suffixes in dijon:
[error]    org.scalamacros:quasiquotes _2.10, _2.10.3
java.lang.RuntimeException: Conflicting cross-version suffixes in: org.scalamacros:quasiquotes
    at scala.sys.package$.error(package.scala:27)
    at sbt.ConflictWarning$.processCrossVersioned(ConflictWarning.scala:47)
    at sbt.ConflictWarning$.apply(ConflictWarning.scala:30)
    at sbt.Classpaths$$anonfun$61.apply(Defaults.scala:1044)
    at sbt.Classpaths$$anonfun$61.apply(Defaults.scala:1044)

Full build log is here. What am I doing wrong? What should I change in my build.sbt to fix this? I should also be able to cross-compile and release my library against both 2.10.x and 2.11.x.
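One common remedy (a sketch only; the module coordinates below are illustrative, not taken from the build log) is to make sure only a single cross-version of the quasiquotes artifact ends up on the classpath, by excluding the unwanted variant from whichever dependency drags it in:

```scala
// build.sbt sketch: hypothetical coordinates, adapt to your dependency tree
libraryDependencies += ("org.scala-lang" %% "scala-pickling" % "0.8.0")
  .exclude("org.scalamacros", "quasiquotes_2.10.3")
```

The output of sbt's `last update` can help identify which dependency pulls in each conflicting suffix.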

by wrick at April 18, 2014 10:30 AM

Spark runs out of memory when grouping by key

I am attempting to perform a simple transformation of Common Crawl data using Spark hosted on EC2, following this guide. My code looks like this:

package ccminer

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object ccminer {
  val ENGLISH = "english|en|eng"
  val SPANISH = "es|esp|spa|spanish|espanol"
  val TURKISH = "turkish|tr|tur|turc"
  val GREEK = "greek|el|ell"
  val ITALIAN = "italian|it|ita|italien"
  val ALL = (ENGLISH :: SPANISH :: TURKISH :: GREEK :: ITALIAN :: Nil).mkString("|")

  def langIndep(s : String) = s.toLowerCase().replaceAll(ALL,"*")

  def main(args : Array[String]) {
    if(args.length != 3) {
      System.err.println("Bad command line")
      System.exit(1)
    }
    val CLUSTER="spark://???"
    val sc = new SparkContext(CLUSTER, "Common Crawl Miner",
      System.getenv("SPARK_HOME"), Seq("/root/spark/ccminer/target/scala-2.10/cc-miner_2.10-1.0.jar"))
    val data = sc.sequenceFile[String,String](args(0))
      .map { case (k, v) => (langIndep(k), v) }
      .groupByKey()
      .filter { case (k, vs) => vs.size > 1 }
  }
}

And I am running it with the command as follows:

sbt/sbt "run-main ccminer.ccminer s3n://aws-publicdatasets/common-crawl/parse-output/segment/1341690165636/textData-* s3n://parallelcorpus/out/ 2000"

But very quickly it fails with errors as follows

java.lang.OutOfMemoryError: Java heap space
at com.ning.compress.BufferRecycler.allocEncodingBuffer(
at com.ning.compress.lzf.ChunkEncoder.<init>(
at com.ning.compress.lzf.impl.UnsafeChunkEncoder.<init>(
at com.ning.compress.lzf.impl.UnsafeChunkEncoderLE.<init>(
at com.ning.compress.lzf.impl.UnsafeChunkEncoders.createEncoder(
at com.ning.compress.lzf.util.ChunkEncoderFactory.optimalInstance(
at com.ning.compress.lzf.LZFOutputStream.<init>(
at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:164)
at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:161)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
at org.apache.spark.executor.Executor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$

So my basic question is, what is necessary to write a Spark task that can group by key with an almost unlimited amount of input without running out of memory?
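As an aside, the standard remedy for this pattern (not specific to this job) is to replace group-then-inspect with an incremental per-key aggregate, i.e. reduceByKey/aggregateByKey rather than groupByKey, so no per-key value list is ever materialised. A plain-Scala sketch of the idea, assuming only the group sizes are needed:

```scala
// count values per key incrementally instead of building each group in memory
val pairs = Seq(("a", 1), ("b", 2), ("a", 3), ("a", 4))
val counts = pairs.foldLeft(Map.empty[String, Int].withDefaultValue(0)) {
  case (acc, (k, _)) => acc.updated(k, acc(k) + 1)
}
// keys that would survive the vs.size > 1 filter, without storing the groups
val duplicated = counts.filter { case (_, n) => n > 1 }.keySet
// duplicated: Set("a")
```

In Spark terms this corresponds to `.map { case (k, _) => (k, 1) }.reduceByKey(_ + _)` followed by the size filter; memory per key then stays constant regardless of group size.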

by John McCrae at April 18, 2014 10:27 AM

Portland Pattern Repository


"Working on Distributed Systems (Distributed Computing / Parallel Computing)" [on hold]

How should I start afresh in working on distributed systems (distributed computing / parallel computing)? Actually, I want to be more specific about this.

I want to write code and bring about some change in today's approach to parallel/distributed computing. I want to understand, on my own, how to create effects with code on real-time distributed servers, such as interacting programmatically with BitTorrent clients or Google servers. I have a decent grounding in Java network programming as well as a good knowledge of C. Please suggest how I should proceed, or what else I should learn, to really engage with the latest distributed technologies. I am going to publish a research paper at a local level on "Horizontal Scaling vs. Vertical Scaling", but I still think all of this is a bit too theoretical.

Kindly help me with some pointers on Java Network Programming.

by shekhar at April 18, 2014 09:47 AM


Portfolio optimization with Portfolio CVaR Constraint

I wanted to optimize a portfolio subject to a portfolio-wide CVaR constraint (i.e. $CVaR_p \geq 0.08$). Unfortunately, I only find solutions that minimize the entire CVaR of the portfolio. Exhibit Gordon 2007

Do you mind telling me if I need to add a restriction or how to change the utility function?

Here, you find the inital data and optimization Link


@John: I am so sorry but I still don't get it. The following linear optimization does not return a good solution


-0.0003    0.0006   -0.0019  \\returns of the different assets
-1.0000         0         0
     0   -1.0000         0
     0         0   -1.0000
1.0000         0         0
     0    1.0000         0
     0         0    1.0000
-0.1404   -0.1361   -0.2039  \\CVaR of the different assets

-0.0006 \\min expected return
-0.0800 \\CVaR constraint of the portfolio

Aeq = 1 1 1, beq = 1

When I replace it with a quadratic solver, it still doesn't hold the $CVaR_p$ constraint:

cvars = [-0.1404   -0.1361   -0.2039]

quadprog(sigma, -cvars, A, b, Aeq, beq, [], [], [], opts)

Does Matlab offer an appropriate solver? Is the constraint also convex? (I don't think so, because $CVaR_i$ is additive in comparison to $VaR_i$.)

I want to thank you in advance for your patience, because I do not have a quick grasp (esp. in QuantFinance).


Paper Portfolio Optimization with Conditional Value-at-Risk Objective and Constraints

by Markus at April 18, 2014 09:44 AM


Scala DSL - Simple Math

I'm relatively new to Scala and I'm struggling with DSLs. Currently I'm trying to implement a simple math DSL which could be used with some kind of natural language.

My Idea:

print(Calculate 4 plus 6) => returns 10

print(Calculate 4 minus 2) => returns 2 ... and so on

So far I have implemented two classes: the main class, which serves just for calling the method, and a calculation class. My problem is that I have no idea how I could pass the first number to the calculation object, because it is not allowed to define parameters.

Could anyone help with an example or something?
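A hedged sketch of one way in (the names Calculate and Calculation are mine, not from any library): Scala's apply sugar gets close to the desired surface syntax, although a completely bare Calculate 4 plus 6 is not valid Scala; Calculate(4) plus 6 is as close as it gets without macros.

```scala
// Calculate(4) captures the first operand; plus/minus consume the second
object Calculate {
  def apply(n: Int): Calculation = new Calculation(n)
}

class Calculation(private val acc: Int) {
  def plus(m: Int): Int = acc + m
  def minus(m: Int): Int = acc - m
}

println(Calculate(4) plus 6)   // prints 10
println(Calculate(4) minus 2)  // prints 2
```

The apply method is how the "first number" reaches the calculation object: it becomes the constructor argument of the builder on which plus/minus are then called infix.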

by user3042626 at April 18, 2014 09:37 AM

What is best way to wrap blocking Try[T] in Future[T] in Scala?

Here is the problem: I have a library with a blocking method that returns Try[T]. But since it's a blocking one, I would like to make it non-blocking using Future[T]. In the future block, I would also like to compute something that depends on the original blocking method's return value.

But if I use something like below, then my nonBlocking will return Future[Try[T]], which is less convenient, since Future[T] can already represent failure; I would rather propagate the exception to the Future[T] itself.

def blockMethod(x: Int): Try[Int] = Try { 
  // Some long operation to get an Int from network or IO
  throw new Exception("Network Exception") }

def nonBlocking(x: Int): Future[Try[Int]] = future {
  blockMethod(x).map(_ * 2)
}

Here is what I tried: I just use the .get method in the future {} block, but I'm not sure if this is the best way to do it.

def blockMethod(x: Int): Try[Int] = Try { 
  // Some long operation to get an Int from network or IO
  throw new Exception("Network Exception") }

def nonBlocking(x: Int): Future[Int] = future {
  blockMethod(x).get * 2
}

Is this the correct way to do it? Or is there a more idiomatic Scala way to convert a Try[T] to a Future[T]?
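For what it's worth, a minimal sketch of the Future.fromTry route (available since Scala 2.11; blockMethod below is a stand-in for the real blocking call):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Try

// stand-in for the real blocking call
def blockMethod(x: Int): Try[Int] = Try(x + 1)

// run the blocking call on the execution context, then flatten the Try
// into the Future's own failure channel, so callers get Future[Int]
def nonBlocking(x: Int): Future[Int] =
  Future(blockMethod(x)).flatMap(Future.fromTry).map(_ * 2)

Await.result(nonBlocking(1), 1.second)  // 4
```

The .get variant in the question also works, since an exception thrown inside a future block becomes a failed Future; fromTry just makes the intent explicit.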

by Brian Hsu at April 18, 2014 09:22 AM

lazy val v.s. val for recursive stream in Scala

I understand the basic difference between val and lazy val, but when I ran across this example I was confused.

The following code is the correct one. It is a recursive lazy value of stream type.

def recursive() = {
     lazy val recurseValue: Stream[Int] = 1 #:: func
}

If I change lazy val to val, it reports an error.

def recursive(): Stream[Int] = {
     //error: forward reference failed.
     val recurseValue: Stream[Int] = 1 #:: recurseValue.map(func)
     recurseValue
}

My trace of thought for the 2nd example, by the substitution model/evaluation strategy, is:

the right-hand side of #:: is called by name, so the value shall be of the form:

1 #:: ?,

and if the 2nd element is accessed afterward, it refers to the current recurseValue value, rewriting it to:

1 :: ((1 #:: ?) map func) = 1 :: (func(1) #:: func(?))

.... and so on, such that the compilation should succeed.

I don't see any error when I rewrite it; is there something wrong?

EDIT: CONCLUSION: I found it works fine if the val is defined as a field. I also noticed this post about the implementation of val. The conclusion is that val is implemented differently in a method body, in a field, and in the REPL. That's really confusing.
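As a small illustration of why lazy val is needed here (a sketch; the mapping function _ + 1 is chosen arbitrarily): inside a method body a plain val may not be referenced before its definition completes, while lazy val defers evaluation until first access, so the self-reference is legal.

```scala
def recursive(): Stream[Int] = {
  // Self-reference compiles because evaluation is deferred until first access
  lazy val recurseValue: Stream[Int] = 1 #:: recurseValue.map(_ + 1)
  recurseValue
}
```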

by superChing at April 18, 2014 09:13 AM

Portland Pattern Repository


How to get the int value of a generic Java Enum in Scala

I have some Enums declared in java file:

public class Enums {
    public static enum Achievement { NORMAL, PROGRESSIVE }
    public static enum Log { INFO, WARNING, ERROR }
    public static enum Game { ONE_TIME, GRADUAL }
}

Now, in a Scala file, let's assume I have:

val key: Log = Log.INFO
val typ: Class[_] = getType(key)

What I need is to do:

Enum.valueOf(typ, "INFO")

Unfortunately this approach leads to miscellaneous type errors like

 > Error:(65, 29) type mismatch;
 found   : Class[?0] where type ?0
 required: Class[T]
      val nr = Enum.valueOf(result,

Do you have any ideas? Is there any way to create an enum or get its ordinal in this situation? getType uses reflection to find field's type in class and pattern match it to convert it.


Solved it simply by:

  val enum  = typ.getEnumConstants().find(_.toString.equals("INFO"))
  val ordinal = enum match {
    case Some(enum) => enum.asInstanceOf[Enum[_]].ordinal()
    case None => -1 // no matching constant found
  }
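A generic variant of the same idea, sketched as a helper (java.util.concurrent.TimeUnit is used below purely as a stand-in enum, not something from the original post):

```scala
import java.util.concurrent.TimeUnit

// Look up an enum constant's ordinal via the runtime Class value,
// sidestepping Enum.valueOf's Class[T] type-inference problem.
def enumOrdinal(typ: Class[_], name: String): Option[Int] = {
  val consts = typ.getEnumConstants // null if typ is not an enum class
  if (consts == null) None
  else consts.collectFirst { case e: Enum[_] if e.name == name => e.ordinal }
}
```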

by boneash at April 18, 2014 08:56 AM

Unable to convert generic case class to json using "writes"

I have a class which I want to be able to convert to json:

case class Page[T](items: Seq[T], pageIndex: Int, pageSize: Int, totalCount: Long)

object Page {

  implicit val jsonWriter: Writes[Page[_]] = Json.writes[Page[_]]
}

The error is No apply function found matching unapply parameters

by Alex at April 18, 2014 08:53 AM


Are there non-constructive proofs of existence of "small" Turing machines / NFAs?

After reading a related question about non-constructive existence proofs of algorithms, I was wondering if there are methods of showing the existence of "small" (say, state-wise) computation machines without actually building them.


Suppose we are given some language $L\subseteq \Sigma^*$ and fix some computation model (NFAs / Turing machines / etc.).

Are there any non-constructive existence results showing that an $n$-state machine for $L$ exists, but without the ability to find it (in $poly(n,|\Sigma|)$ time)?

For example, is there any regular language $L$ for which we can show $nsc(L)\leq n$, but for which we don't know how to build an $n$-state automaton?

EDIT: after some discussion with Marzio (thanks!) I think I can formulate the question better as follows:

Is there a language $L$ and a computation model for which the following holds:

  1. We know how to build a machine that computes $L$ and has $m$ states.

  2. We have a proof that an $n$-state machine for $L$ exists (where $n \ll m$), but either we can't find it at all or it would take exponential time to compute it.

by R B at April 18, 2014 08:46 AM


ZPOOL replace defect device in exported pool [on hold]

Yesterday, I put a new disk into my server. Sadly, I didn't check the disk for failures beforehand.

I added it to my pool with the command zpool add nas /dev/disk/by-id/scsi-SATA_ST31500341AS_9VS27Z4M-part1

Shortly after, the CPU load of the server went nearly to infinity; I couldn't even log back in.

So I performed a hard reboot (Alt + SysRq + b), but the server couldn't boot. (After GRUB showed up, nothing more happened for about 5 minutes.) Then I shut it down and took out the new disk. I booted up and it worked.

But now I have the problem that I can't access the so-called "nas" pool, because the last (the new) disk shows status "UNAVAIL", and because it's not a mirrored pool, the whole pool is in state UNAVAIL.

If I put the disk in again and do a zpool online nas /dev/disk/by-id/scsi-SATA_ST31500341AS_9VS27Z4M-part1, it doesn't work and tells me "the disk could not be found".

So I tried some possibilities I read in the Oracle docs and exported the pool with zpool export nas. Now I'm not even able to import the pool.

zpool import nas -f
cannot import 'nas': one or more devices is currently unavailable

And if I look at zpool import, it tells me:

pool: nas
     id: 3366469163144781663
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.

        nas                                               UNAVAIL  missing device
          dm-name-linuxServer-nas                         ONLINE
          ata-WDC_WD20EARX-00PASB0_WD-WCAZAC521840-part1  ONLINE
          ata-WDC_WD20EFRX-68AX9N0_WD-WMC300228535-part1  ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

To be clear: the pool is completely irrelevant, the data is not. If I were able to access the data somehow, I could copy it to an external HDD.

Yes, I have no backup (shame on me!), but I don't usually have a spare 6 TB for backups lying around.

Is there any possibility to access this data? Maybe faking the disk so that zpool thinks it's available, or something like that?

Any help would be greatly appreciated.

by stueliueli at April 18, 2014 08:43 AM


Computer Science vs. Computer Engineering

If you'd like to get straight to the point, skip to the last line.

Hi everyone. I am currently a sophomore computer engineering student who is very interested in switching to computer science, as a lot of the topics I am interested in are not touched upon in my current track.

The only problem is that everyone I share this with strongly advises against switching, despite my argument that I'm not very happy where I'm at. Not only do my parents think I should stick with it, but so does the entire IT department where my mom works, and my academic advisor as well. I would normally disregard all of their opinions and switch regardless, but I am by no means financially independent.

The bottom line for me is that my mental health and grades are suffering because of this issue, but I apparently need more than that to earn their blessing.

Would anyone care to elaborate on the differences between these two fields?

submitted by sjr_
[link] [35 comments]

April 18, 2014 08:41 AM


Heroku database settings

I am trying to deploy a Scala Akka Spray application to Heroku.

The application.conf file:

akka {
  loglevel = DEBUG
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
}

service {
  host = "localhost"
  port = 8080
}

db {
  host = "localhost"
  port = 3306
  name = "mysql"
  user = "root2"
  password = "marta"
}

What should be the service and db settings on Heroku?
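Not an authoritative answer, but a common pattern: Heroku injects PORT (and, with a database add-on, DATABASE_URL) as environment variables, and Typesafe Config can fall back to them via ${?VAR} substitution. A sketch (the key names below are illustrative):

```hocon
service {
  host = "0.0.0.0"   # bind all interfaces, not localhost
  port = 8080
  port = ${?PORT}    # overridden by Heroku's PORT at runtime
}

db {
  # e.g. postgres://user:password@host:5432/dbname from a database add-on
  url = ${?DATABASE_URL}
}
```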

by user3294171 at April 18, 2014 08:15 AM

SBT: external config file with values accessible in build.sbt

I have an sbt project of standard structure. I'd like to have a file which I could keep separate from my build, and specify there values for use in Build.scala or build.sbt (to avoid polluting the repository with local configuration).

It may be plain .properties format, or a Scala file, or Typesafe Config, or any other format (common sbt practice is most welcome, of course).


Is there a common practice for this, so that the values are accessible in sbt build files? I want to pass them as test arguments while keeping them out of the build files themselves.
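One low-tech sketch (assuming a simple key=value file such as a git-ignored local.properties; file and key names are mine): parse it with java.util.Properties, then feed the values into settings such as test options from Build.scala or build.sbt.

```scala
import java.io.StringReader
import java.util.Properties

// Parse key=value content; in a build you would read the file from disk
// and pass the values to e.g. `javaOptions in Test` as -D arguments.
def parseProps(content: String): Properties = {
  val p = new Properties()
  p.load(new StringReader(content))
  p
}

def propOr(p: Properties, key: String, default: String): String =
  Option(p.getProperty(key)).getOrElse(default)
```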

by dmitry at April 18, 2014 08:03 AM

the error of package auto scanner, p.a.d.s.d.TableScanner$, how to fix it?

I use models.web and models.user instead of just models.*, in Play Framework 2.2. I can run the app, but when I compile or run it, it always gives me the error:

[info] play - database [default] connected at jdbc:postgresql://localhost:5432/w
[error] p.a.d.s.d.TableScanner$ - Could not find any classes or table queries for: models.*

Does anyone know how to fix this models.* problem and make it scan the correct packages?

by user504909 at April 18, 2014 08:00 AM



Reverse of flatMap in scala

I'm trying to take an iterator of Strings and turn it into an iterator of collections of strings, based on an arbitrary splitting function.

So say I have

val splitter: String => Boolean = s => s.isEmpty

then I want it to take

val data = List("abc", "def", "", "ghi", "jkl", "mno", "", "pqr").iterator

and have

def f[A] (input: Iterator[A], splitFcn: A => Boolean): Iterator[X[A]]

where X can be any collection-like class you want, so long as it can be converted into a Seq, such that

f(data, splitter).foreach(l => println(l.toList))

should print:

    List("abc", "def")
    List("ghi", "jkl", "mno")
    List("pqr")

Is there a clean way to do this, that does not require collecting the results of the input iterator entirely into memory?
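One possible sketch that stays lazy (it buffers only the current group, never the whole input; the names below are mine, not from the post):

```scala
// Group an iterator into chunks separated by elements matching splitFcn.
// Only the chunk currently being built is held in memory.
def split[A](input: Iterator[A], splitFcn: A => Boolean): Iterator[Seq[A]] =
  new Iterator[Seq[A]] {
    private val it = input.buffered
    private def skipSeparators(): Unit =
      while (it.hasNext && splitFcn(it.head)) it.next()
    skipSeparators()
    def hasNext: Boolean = it.hasNext
    def next(): Seq[A] = {
      val chunk = scala.collection.mutable.ArrayBuffer.empty[A]
      while (it.hasNext && !splitFcn(it.head)) chunk += it.next()
      skipSeparators() // drop the separator(s) trailing this chunk
      chunk.toList
    }
  }
```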

by Nathan Kronenfeld at April 18, 2014 07:29 AM


Liberator, monger and http-kit

Do any of you have experience using these three together? I'm facing a problem where I cannot call any monger API inside the body of defresource. It always spits out the error "can't serialize class org.httpkit.server.AsyncChannel". Have any of you experienced this?

submitted by washtafel
[link] [1 comment]

April 18, 2014 06:58 AM


What was the reasoning behind ClojureScript not needing Clojure's defstruct?

defstruct is not supported in ClojureScript - it would appear to be by design. Now it may be that this is effectively a deprecated part of the Clojure language, and the designers of ClojureScript were just hoping everyone had moved on. (But this is my speculation).

My question is: What was the reasoning behind ClojureScript not needing Clojure's defstruct?

by hawkeye at April 18, 2014 06:43 AM


Recognizing interval graphs--"equivalent intervals"

I was reading a paper for recognizing interval graphs. Here is an excerpt from the paper:

Each interval graph has a corresponding interval model in which two intervals overlap if and only if their corresponding vertices are adjacent. Such a representation is usually far from unique. To eliminate uninteresting variations of the endpoint orderings, we shall consider the following block structure of endpoints: Denote the right (resp. left) endpoint of an interval $u$ by $R(u)$ (resp. $L(u)$). In an interval model, define a maximal contiguous set of right (resp. left) endpoints as an R-block (resp. L-block). Thus, the endpoints can be grouped as a left-right block sequence. Since an endpoint block is a set, the endpoint orderings within a block are ignored. It is easy to see that the overlapping relationship does not change if one permutes the endpoint order within each block. Define two interval models for $G$ to be equivalent if either their left-right block sequences are identical or one is the reversal of the other.

I am unable to understand the notion of equivalent interval models. Can someone help me?

by anonymous at April 18, 2014 06:40 AM


Black Scholes vs Binomial Model

I'm trying to confirm my understanding of the two models. It is my understanding that Black-Scholes is a special case of the binomial model with infinitely many steps.

Does this mean that if I were to start with a binomial model with 1 step and increase the number of steps towards infinity, I would approach the same value concluded by Black-Scholes?

If so, does this mean I could take the implied volatility derived from the market price of an option via the Black-Scholes formula, together with the rest of the values (r, t, K, S, σ(IV)), and approach the same market price as the number of steps approaches infinity? Would this only be the case for a European call, with more disagreement on the value of American options with early exercise?
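For European options the convergence does hold: the Cox-Ross-Rubinstein (CRR) binomial price approaches the Black-Scholes price as the step count grows. A self-contained sketch (the parameter values in the comments are arbitrary; the normal CDF uses the Abramowitz-Stegun erf approximation):

```scala
import math.{abs, exp, log, max, pow, sqrt}

// Abramowitz-Stegun 7.1.26 approximation of erf (|error| < 1.5e-7)
def erf(x: Double): Double = {
  val t = 1.0 / (1.0 + 0.3275911 * abs(x))
  val y = 1.0 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t * exp(-x * x)
  if (x >= 0) y else -y
}

// Standard normal CDF
def phi(x: Double): Double = 0.5 * (1.0 + erf(x / sqrt(2.0)))

// Black-Scholes price of a European call
def bsCall(s: Double, k: Double, r: Double, sigma: Double, t: Double): Double = {
  val d1 = (log(s / k) + (r + sigma * sigma / 2) * t) / (sigma * sqrt(t))
  val d2 = d1 - sigma * sqrt(t)
  s * phi(d1) - k * exp(-r * t) * phi(d2)
}

// CRR binomial price of a European call via backward induction
def crrCall(s: Double, k: Double, r: Double, sigma: Double, t: Double, steps: Int): Double = {
  val dt = t / steps
  val u = exp(sigma * sqrt(dt))
  val d = 1.0 / u
  val p = (exp(r * dt) - d) / (u - d) // risk-neutral up-move probability
  val disc = exp(-r * dt)
  // payoffs at expiry, then discounted expectations backwards to t = 0
  var v = Array.tabulate(steps + 1)(i => max(s * pow(u, steps - i) * pow(d, i) - k, 0.0))
  for (n <- steps - 1 to 0 by -1)
    v = Array.tabulate(n + 1)(i => disc * (p * v(i) + (1 - p) * v(i + 1)))
  v(0)
}
```

On the last point: yes, the clean agreement is for European options; an American option priced on the binomial tree (checking early exercise at each node) can carry a premium that the Black-Scholes formula does not capture.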


by Andromeda at April 18, 2014 06:38 AM


Integration of liberator and monger

I want to create a handler using liberator's defresource that serves data from mongodb using monger.

(ns myns
  (:require [monger [collection :as mc]]
            [ [json :as json]]
            [liberator.core :refer [defresource]])
  (:import org.bson.types.ObjectId))

(defresource foo-handler [id]
  :available-media-types ["text/html" "application/json"]
  :handle-ok (json/write-str (mc/find-map-by-id "foo" (ObjectId. id))))

It simply gives me an HTTP 500 status code and "can't serialize class org.httpkit.server.AsyncChannel". If I remove (mc/find-map-by-id "foo" (ObjectId. id)) and change it to a simple map, it works.

Is there any workaround? By the way, I'm using http-kit.

by Faris Nasution at April 18, 2014 05:51 AM


Consequences of nondeterminism speeding up deterministic computation

If $\mathsf{NP}$ contains a class of superpolynomial time problems, i.e.

for some function $t \in n^{\omega(1)}$, $\mathsf{DTIME}(t) \subseteq \mathsf{NP}$,

then it follows from the deterministic time hierarchy theorem that $\mathsf{P} \subsetneq \mathsf{NP}$.

But are there any other interesting nontrivial consequences (i.e., not consequences of $\mathsf{P} \subsetneq \mathsf{NP}$) if nondeterminism can speed up deterministic computations?

by GMB at April 18, 2014 05:14 AM



concatenating sequences in a 4clojure function

I just finished 4clojure's problem 60; here's the code for my first program, with the problem description:

;;Write a function which behaves like reduce, but returns each intermediate
;;value of the reduction. Your function must accept either two or three arguments,
;;and the return sequence must be lazy.
(fn red
  ([fun sq]
     (red fun (first sq) (rest sq)))
  ([fun acum sq]
   (if (empty? sq) acum
     (cons acum (lazy-seq (red fun (fun acum (first sq)) (rest sq)))))))

The core of the function occurs one line below the if: I just return the initial value, followed by applying it to the next element in the sequence. But it fails the second test case, involving vectors:

user=> (red conj [1] [2 3 4])
([1] [1 2] [1 2 3] 1 2 3 4);; it should be ([1] [1 2] [1 2 3] [1 2 3 4])

It took me some time to realize that the problem was the cons, which just adds the vector [1 2 3 4] as if it were the rest of the list instead of a single element.

What I did was convert cons to concat and acum to [acum], and it worked:

(fn red
  ([fun sq]
     (red fun (first sq) (rest sq)))
  ([fun acum sq]
   (if (empty? sq) [acum]
     (concat [acum] (lazy-seq
                     (red fun (fun acum (first sq)) (rest sq)))))))

Don't ask me why, but it seems kind of inelegant to me; the other solutions didn't use concat either.

The question is: considering the first function as it is, what function/macro does the work without modifying the code too much?

by loki at April 18, 2014 05:09 AM


Slightly paranoid question about hardware security.

Okay, short summary: I smoke electronic cigarettes (I quit regular cigarettes a while ago), and my liquid pen battery charger is a big dumb old thing with a giant electronics piece between the battery plug and the USB male end. It should go without saying that I plug it into a computer 9/10 times for charging (stupid, I know, in line with the question ahead). Having a spare charger, what software-based and hardware-based examination can I give this thing to see if there is a live-boot or self-executable on it? Every time I plug it into my father's computer, the damn thing instantly reboots. I don't like that at all without an explanation.

TL;DR: My computer reboots every time I plug in my e-cigarette to charge. It's a big dumb USB charger; how do I check, on the software and hardware side, if this thing is fucking with my computer?

submitted by Twisted_word
[link] [1 comment]

April 18, 2014 04:19 AM


What is the difference between the reader monad and a partial function in Clojure?

Leonardo Borges has put together a fantastic presentation on Monads in Clojure. In it he describes the reader monad in Clojure using the following code:

;; Reader Monad

(def reader-m
  {:return (fn [a]
             (fn [_] a))
   :bind (fn [m k]
           (fn [r]
             ((k (m r)) r)))})

(defn ask  []  identity)
(defn asks [f]
  (fn [env]
    (f env)))

(defn connect-to-db []
  (do-m reader-m
        [db-uri (asks :db-uri)]
        (prn (format "Connected to db at %s" db-uri))))

(defn connect-to-api []
  (do-m reader-m
        [api-key (asks :api-key)
         env (ask)]
        (prn (format "Connected to api with key %s" api-key))))

(defn run-app []
  (do-m reader-m
        [_ (connect-to-db)
         _ (connect-to-api)]
        (prn "Done.")))

((run-app) {:db-uri "user:passwd@host/dbname" :api-key "AF167"})
;; "Connected to db at user:passwd@host/dbname"
;; "Connected to api with key AF167"
;; "Done."

The benefit of this is that you're reading values from the environment in a purely functional way.

But this approach looks very similar to the partial function in Clojure. Consider the following code:

user=> (def hundred-times (partial * 100))
#'user/hundred-times
user=> (hundred-times 5)
500
user=> (hundred-times 4 5 6)
12000

My question is: What is the difference between the reader monad and a partial function in Clojure?

by hawkeye at April 18, 2014 04:00 AM



Implementing `elem` with foldLeft

I'm working on Learn You a Haskell. On the "fold" section, I need to implement elem (given an element, find out if it's in the list - True or False).

def myElem(a: Char, as: List[Char]): Boolean = as match {
  case Nil => false
  case x :: Nil => println(x); if (x == a) true else false
  case x :: _ => println(x); as.foldLeft(false) { (_, elem) =>
                   if (a == elem) true
                   else myElem(a, as.tail)
                 }
}

However, it's failing on a simple example:

scala> myElem('a', "ab".toList)
res8: Boolean = false

What am I doing wrong here? Also, as an extra, I'd appreciate any suggestions for improving this code.

As an aside, I would think a find would be more appropriate here.

by Kevin Meredith at April 18, 2014 03:39 AM


How to install PostgreSQL 9.3 in FreeBSD jail?

I configured virtual NICs using pf, and a FreeBSD jail using qjail create pgsql-jail.

When I tried to install PostgreSQL 9.3 using the ports collection, it showed a strange message at first.

pgsql-jail /usr/ports/databases/postgresql93-server >make install
===> Building/installing dialog4ports as it is required for the config dialog
===>  Cleaning for dialog4ports-0.1.5_1
===> Skipping 'config' as NO_DIALOG is defined
====> You must select one and only one option from the KRB5 single
*** [check-config] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
*** [install] Error code 1

Stop in /basejail/usr/ports/ports-mgmt/dialog4ports.
===> Options unchanged
=> postgresql-9.3.0.tar.bz2 doesn't seem to exist in /var/ports/distfiles/postgresql.
=> Attempting to fetch
postgresql-9.3.0.tar.bz2                        1% of   16 MB   71 kBps

Anyway, the installation continued, so I waited. I chose the defaults in all option dialogs. And at the end of the process, I saw it finally fail with this message.

====> Compressing man pages
===>  Building package for pkgconf-0.9.3
Creating package /basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz
Registering depends:.
Registering conflicts: pkg-config-*.
Creating bzip'd tar ball in '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
tar: Failed to open '/basejail/usr/ports/devel/pkgconf/pkgconf-0.9.3.tbz'
pkg_create: make_dist: tar command failed with code 256
*** [do-package] Error code 1

Stop in /basejail/usr/ports/devel/pkgconf.
*** [build-depends] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [install] Error code 1

Stop in /basejail/usr/ports/textproc/libxml2.
*** [lib-depends] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.
*** [install] Error code 1

Stop in /basejail/usr/ports/databases/postgresql93-server.

I have no idea why this fails. The errors at the beginning seem to indicate something wrong with dialog4ports, and the errors at the end seem to indicate that the installer cannot write to the ports file tree. AFAIK, the ports files are shared read-only from the host system.

What's wrong with my jail? How can I install PostgreSQL 9.3 in my jail?

by Eonil at April 18, 2014 03:38 AM


Good videos to review for AP Comp Sci A?

Hi, I'm reviewing for the AP Comp Sci A test, and I was wondering if anyone knows any good review videos I could watch to help me prepare.

submitted by unavailable123
[link] [2 comments]

April 18, 2014 03:11 AM


Why is my bubble sort taking longer to sort a random array as opposed to a descending array?

I am in an entry-level algorithms class, and for our final project we are coding and thoroughly analyzing 6 different sorting methods. Part of the analysis is timing the methods and comparing the runtime results depending on the original order of the array (in order to more fully grasp the concept of constant costs, I suppose). I coded the bubble sort in Java, and when I run it on an array that is in descending order, it returns a sorted array FASTER than when I run it on an array of random ints, even though it is doing, on average, twice as many swaps. It seems to me that doing twice as many operations should result in taking much longer to finish. I have NO idea what could be causing this discrepancy, and any help would be appreciated.

by Eyeball McCool at April 18, 2014 03:09 AM


High school senior project ideas?

Hi Reddit, I'm currently a junior in high school. At my school, all the seniors have to complete senior projects. Creating a program of some sort for my senior project would be of interest to me.

I'm taking AP Comp Sci right now (I have a B). I can write programs in Java just fine, but I have no creative juices flowing from my brain. I have no idea what kind of program I should write. I want to write something that challenges me, but isn't impossible to complete. Any of you guys have ideas?

submitted by iForesee
[link] [5 comments]

April 18, 2014 03:06 AM

DragonFly BSD Digest

BSDNow 033: Certified Package Delivery

As you can guess from the title, this week’s BSDNow talks about building OpenBSD packages in bulk among other things, and also interviews Jim Brown of

by Justin Sherrill at April 18, 2014 03:02 AM


#1020; The Unknown Knowns

“It still works as a parable.”

“There must be some distinction between a parable and simply a lie”

by David Malki at April 18, 2014 03:00 AM

DragonFly BSD Digest

BSD Magazine for March

The March issue of BSD Magazine is out, and this month has an article written by Siju George about how his company is using DragonFly and Hammer for backups.

by Justin Sherrill at April 18, 2014 02:59 AM


installing zeromq under pypy

I have installed zeromq under CPython. How can I install it so that it also runs under PyPy?

The problem is that zeromq needs Cython.

by Davoud Taghawi-Nejad at April 18, 2014 02:53 AM


To prove the recurrence by substitution method $T(n) = 7T(n/2) + n^2$

I have done the proof up to the point of assuming $T(n) \leq cn^{\log7}$.

But when it comes to finding the value of constant $c$, I am getting stuck.

The given recurrence relation is $T(n) = 7T(n/2) + n^2$.

Since we already calculated the solution above which is $cn^{\log 7}$.

Inductive step:

Now we have to prove that $T(n) \leq c n^{\log7}$, where $c$ is a positive constant. If we assume that the solution holds for $n/2$, then we can prove that it works for $n$ as well: $$T(n/2) \leq c(n/2)^{\log7}.$$ Substituting these values into the recurrence relation:

$$ \begin{align*} T(n) &\leq 7c/(2)^{\log7} \times (n)^{\log7} + n^2 \\ &\leq cn^{\log7}, \text{ since $7/(2)^{\log7}$ is constant so can be ignored and $cn^{\log7} \gg n^2$ for large $n$} \\ &\leq cn^{\log7} \text{ assuming $c$ is a constant $\geq 1$.} \end{align*} $$

Finally to find constant $c$,

$$(7/(2)^{\log7}) \times cn^{\log7} + n^2 \leq cn^{\log7}. $$

I am not able to find appropriate $c$ for which the condition holds true.
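A note on why the search for $c$ fails, and the standard remedy (assuming $\log$ means $\log_2$, so that $2^{\log 7} = 7$): the substitution leaves an extra $n^2$ that no choice of $c$ can absorb, so the inductive hypothesis must be strengthened by subtracting a lower-order term.

```latex
% With the plain guess the n^2 term is never absorbed, since
% 7/2^{\log 7} = 7/7 = 1:
T(n) \le 7c\left(\tfrac{n}{2}\right)^{\log 7} + n^2
     = c n^{\log 7} + n^2
     > c n^{\log 7} \quad \text{(fails for every } c\text{)}.
% Strengthened hypothesis: T(n) \le c n^{\log 7} - d n^2. Then
T(n) \le 7\left(c\left(\tfrac{n}{2}\right)^{\log 7}
         - d\left(\tfrac{n}{2}\right)^2\right) + n^2
     = c n^{\log 7} - \tfrac{7}{4} d n^2 + n^2
     \le c n^{\log 7} - d n^2
     \quad \text{whenever } d \ge \tfrac{4}{3}.
```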

by user16666 at April 18, 2014 02:50 AM

Recursive Majority Gate Information Loss

According to information theory, the logic gates AND, NAND, OR, NOR each lose 1.189 bits of information with two bits of information at their inputs and all inputs equiprobable. But when gates are used in combination with other gates in a circuit, to calculate the entropy loss for the entire circuit you have to use what are called "mixture probabilities" or "mixing distributions", the loss for each gate being dependent on the entropy-loss history of the preceding gates. Can anyone calculate the entropy loss of a recursive majority gate (9 inputs, all equally likely {1,0})? This is way beyond my mathematical skills; I was hoping someone on this site could be adventurous enough to try to calculate it.

by William Hird at April 18, 2014 02:42 AM


Why is this JeroMQ (ZeroMQ port) benchmark so slow?

I would like to use this library I found; it's a pure Java port (not a wrapper) of ZeroMQ. I am trying to test it, and while it claims some good numbers, the test I am performing gives rather poor results, even though it's performed locally (client and server on the same machine). I'm sure it's something I am doing wrong. It takes approx. 5 seconds to execute this 10,000-message loop.

All I did was take the Hello World example and remove the pause and sysouts. Here is the code:

The Server:

package guide;

import org.jeromq.ZMQ;

public class hwserver{
    public static void main(String[] args) throws Exception{

        //  Prepare our context and socket
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.REP);

        System.out.println("Binding hello world server");
        socket.bind ("tcp://*:5555");        

        while (true) {
            byte[] reply = socket.recv(0);
            String requestString = "Hello";
            byte[] request = requestString.getBytes();
            socket.send(request, 0);
        }
    }
}

The Client:

package guide;

import org.jeromq.ZMQ;

public class hwclient{
    public static void main(String[] args){
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.REQ);
        socket.connect ("tcp://localhost:5555");

        System.out.println("Connecting to hello world server");

        long start = System.currentTimeMillis();
        for (int request_nbr = 0; request_nbr != 10_000; request_nbr++) {
            String requestString = "Hello";
            byte[] request = requestString.getBytes();
            socket.send(request, 0);
            byte[] reply = socket.recv(0);
        }
        long end = System.currentTimeMillis();
    }
}

Is it possible to fix this code and get some decent numbers?

by Dan at April 18, 2014 02:40 AM


Complexity question from mathematical music theory

Fix a positive integer $N$.

A row means any linear ordering $R=(n_i)_{0\leq i <N}$ of the additive group ${\Bbb Z}/N{\Bbb Z}$.

Call $R$ a (generalized) all-interval row if the elements of the sequence $(n_i-n_{i-1})_{1\leq i <N}$ exhaust $({\Bbb Z}/N{\Bbb Z})\setminus \{0\}$. (This requires $N$ to be even.)

I would like to know if the following decision problem is NP-complete:

Given $M<N$ and a sequence $S=(n_i)_{0\leq i <M}$ with $n_i\in {\Bbb Z}/N{\Bbb Z}$, does $S$ extend to an all-interval row?

by David Feldman at April 18, 2014 02:40 AM



How can I evaluate "symbol" and "(symbol 1)" with the same name?

I want to get the following results when I evaluate edit-url and (edit-url 1):

edit-url     --> "/articles/:id/edit"
(edit-url 1) --> "/articles/1/edit"

Is it possible to define such a Var or something?
Currently I use the following function, but I don't want to have to write (edit-url) to get the constant string.

(defn edit-url ([] "/articles/:id/edit") ([id] (str "/articles/" id "/edit")))

Thanks in advance.

by snufkon at April 18, 2014 01:34 AM

arXiv Logic in Computer Science

Finite Groupoids, Finite Coverings and Symmetries in Finite Structures. (arXiv:1404.4599v1 [math.CO])

We propose a novel construction of finite hypergraphs and relational structures that is based on reduced products with Cayley graphs of groupoids. To this end we construct groupoids whose Cayley graphs have large girth not just in the usual sense, but with respect to a discounted distance measure that contracts arbitrarily long sequences of edges within the same sub-groupoid (coset) and only counts transitions between cosets. Reduced products with such groupoids are sufficiently generic to be applicable to various constructions that are specified in terms of local glueing operations and require global finite closure. We here examine hypergraph coverings and extension tasks that lift local symmetries to global automorphisms.

by Martin Otto at April 18, 2014 01:30 AM

Analyzing Android Browser Apps for file:// Vulnerabilities. (arXiv:1404.4553v1 [cs.CR])

Securing browsers in mobile devices is very challenging, because these browser apps usually provide browsing services to other apps in the same device. A malicious app installed in a device can potentially obtain sensitive information through a browser app. In this paper, we identify four types of attacks in Android, collectively known as FileCross, that exploits the vulnerable file:// to obtain user's private files, such as cookies, bookmarks, and browsing histories. Our study shows that this class of attacks is much more prevalent and damaging than previously thought. We design an automated system to dynamically test 115 browser apps collected from Google Play and find that 64 of them are vulnerable to the attacks. Among them are the popular Firefox, Baidu and Maxthon browsers, and the more application-specific ones, including UC Browser HD for tablet users, Wikipedia Browser, and Kids Safe Browser. A detailed analysis of these browsers further shows that 26 browsers (23%) expose their browsing interfaces unintentionally. In response to our reports, the developers concerned promptly patched their browsers by forbidding file:// access to private file zones, disabling JavaScript execution in file:// URLs, or even blocking external file:// URLs. We employ the same system to validate the nine patches received from the developers and find one still failing to block the vulnerability.

by Daoyuan Wu, Rocky K. C. Chang at April 18, 2014 01:30 AM

Macroprudential oversight, risk communication and visualization. (arXiv:1404.4550v1 [q-fin.CP])

This paper discusses the role of risk communication in macroprudential oversight and of visualization in risk communication. Beyond the soar in availability and precision of data, the transition from firm-centric to system-wide supervision imposes obvious data needs. Moreover, broad and effective communication of timely information related to systemic risks is a key mandate of macroprudential supervisors, which further stresses the importance of simple representations of complex data. Risk communication comprises two tasks: internal and external dissemination of information about systemic risks. This paper focuses on the background and theory of information visualization and visual analytics, as well as techniques provided within these fields, as potential means for risk communication. We define the task of visualization in internal and external risk communication, and provide a discussion of the type of available macroprudential data and an overview of visualization techniques applied to systemic risk. We conclude that two essential, yet rare, features for supporting the analysis of big data and communication of risks are analytical visualizations and interactive interfaces. This is illustrated with implementations of three analytical visualizations and five web-based interactive visualizations to systemic risk indicators and models.

by <a href="">Peter Sarlin</a> at April 18, 2014 01:30 AM

Collective computation in a network with distributed information. (arXiv:1404.4540v1 [cs.SI])

We analyze a distributed information network in which each node has access to the information contained in a limited set of nodes (its neighborhood) at a given time. A collective computation is carried out in which each node calculates a value that implies all information contained in the network (in our case, the average value of a variable that can take different values in each network node). The neighborhoods can change dynamically by exchanging neighbors with other nodes. The results of this collective calculation show rapid convergence and good scalability with the network size. These results are compared with those of a fixed network arranged as a square lattice, in which the number of rounds to achieve a given accuracy is very high when the size of the network increases. The results for the evolving networks are interpreted in light of the properties of complex networks and are directly relevant to the diameter and characteristic path length of the networks, which seem to express "small world" properties.
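The collective averaging the abstract describes can be sketched with a fixed ring neighborhood; this is a simplification (the paper's neighborhoods evolve by exchanging neighbors), and the initial values below are made-up illustration data:

```scala
// Each node repeatedly replaces its value by the mean over its local
// neighborhood (itself and its two ring neighbors). The update is doubly
// stochastic, so the global mean is preserved and all values converge to it.
def step(vals: Vector[Double]): Vector[Double] = {
  val n = vals.length
  Vector.tabulate(n) { i =>
    (vals((i - 1 + n) % n) + vals(i) + vals((i + 1) % n)) / 3.0
  }
}

def run(vals: Vector[Double], rounds: Int): Vector[Double] =
  (1 to rounds).foldLeft(vals)((v, _) => step(v))
```

On a ring this converges slowly as the network grows, which matches the abstract's observation about fixed lattices versus evolving neighborhoods.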

by <a href="">Antonio C&#xf3;rdoba</a>, <a href="">Daniel Aguilar-Hidalgo</a>, <a href="">M. Carmen Lemos</a> at April 18, 2014 01:30 AM

Retargeting Without Tracking. (arXiv:1404.4533v1 [cs.CR])

Retargeting ads are increasingly prevalent on the Internet as their effectiveness has been shown to outperform that of conventional targeted ads. Retargeting ads are based not only on users' interests, but also on their intents, i.e. commercial products users have shown interest in. Existing retargeting systems heavily rely on tracking, as retargeting companies need to know not only the websites a user has visited but also the exact products on these sites. They are therefore very intrusive and privacy-threatening. Furthermore, these schemes are still sub-optimal since tracking is partial, and they often deliver ads that are obsolete (because, for example, the targeted user has already bought the advertised product).

This paper presents the first privacy-preserving retargeting ads system. In the proposed scheme, the retargeting algorithm is distributed between the user and the advertiser such that no systematic tracking is necessary, more control and transparency is provided to users, but still a lot of targeting flexibility is provided to advertisers. We show that our scheme, that relies on homomorphic encryption, can be efficiently implemented and trivially solves many problems of existing schemes, such as frequency capping and ads freshness.

by <a href="">Minh-Dung Tran</a>, <a href="">Gergely Acs</a>, <a href="">Claude Castelluccia</a> at April 18, 2014 01:30 AM

A Complete Solver for Constraint Games. (arXiv:1404.4502v1 [cs.GT])

Game Theory studies situations in which multiple agents having conflicting objectives have to reach a collective decision. The question of a compact representation language for agents' utility functions is of crucial importance since the classical representation of an $n$-player game is given by an $n$-dimensional matrix of exponential size for each player. In this paper we use the framework of Constraint Games, in which CSPs are used to represent utilities. Constraint Programming --including global constraints-- makes it easy to give a compact and elegant model of many useful games. Constraint Games come in two flavors: Constraint Satisfaction Games and Constraint Optimization Games, the former using satisfaction to define boolean utilities. In addition to multimatrix games, it is also possible to model more complex games where hard constraints forbid certain situations. In this paper we study complete search techniques and show that our solver using the compact representation of Constraint Games is faster than the classical game solver Gambit by one to two orders of magnitude.

by <a href="">Thi-Van-Anh Nguyen</a>, <a href="">Arnaud Lallouet</a> at April 18, 2014 01:30 AM

Boxicity and separation dimension. (arXiv:1404.4486v1 [math.CO])

A family $\mathcal{F}$ of permutations of the vertices of a hypergraph $H$ is called 'pairwise suitable' for $H$ if, for every pair of disjoint edges in $H$, there exists a permutation in $\mathcal{F}$ in which all the vertices in one edge precede those in the other. The cardinality of a smallest such family of permutations for $H$ is called the 'separation dimension' of $H$ and is denoted by $\pi(H)$. Equivalently, $\pi(H)$ is the smallest natural number $k$ so that the vertices of $H$ can be embedded in $\mathbb{R}^k$ such that any two disjoint edges of $H$ can be separated by a hyperplane normal to one of the axes. We show that the separation dimension of a hypergraph $H$ is equal to the 'boxicity' of the line graph of $H$. This connection helps us in borrowing results and techniques from the extensive literature on boxicity to study the concept of separation dimension.

by <a href="">Manu Basavaraju</a>, <a href="">L. Sunil Chandran</a>, <a href="">Martin Charles Golumbic</a>, <a href="">Rogers Mathew</a> at April 18, 2014 01:30 AM

Separation dimension of sparse graphs. (arXiv:1404.4484v1 [math.CO])

The separation dimension of a graph $G$ is the smallest natural number $k$ for which the vertices of $G$ can be embedded in $\mathbb{R}^k$ such that any pair of disjoint edges in $G$ can be separated by a hyperplane normal to one of the axes. Equivalently, it is the smallest possible cardinality of a family $\mathcal{F}$ of permutations of the vertices of $G$ such that for any two disjoint edges of $G$, there exists at least one permutation in $\mathcal{F}$ in which all the vertices in one edge precede those in the other. In general, the maximum separation dimension of a graph on $n$ vertices is $\Theta(\log n)$. In this article, we focus on sparse graphs and show that the maximum separation dimension of a $k$-degenerate graph on $n$ vertices is $O(k \log\log n)$ and that there exists a family of $2$-degenerate graphs with separation dimension $\Omega(\log\log n)$. We also show that the separation dimension of the graph $G^{1/2}$ obtained by subdividing once every edge of another graph $G$ is at most $(1 + o(1)) \log\log \chi(G)$ where $\chi(G)$ is the chromatic number of the original graph.

by <a href="">Manu Basavaraju</a>, <a href="">L. Sunil Chandran</a>, <a href="">Rogers Mathew</a>, <a href="">Deepak Rajendraprasad</a> at April 18, 2014 01:30 AM

On Independence Atoms and Keys. (arXiv:1404.4468v1 [cs.DB])

Uniqueness and independence are two fundamental properties of data. Their enforcement in database systems can lead to higher quality data, faster data service response time, better data-driven decision making and knowledge discovery from data. The applications can be effectively unlocked by providing efficient solutions to the underlying implication problems of keys and independence atoms. Indeed, for the sole class of keys and the sole class of independence atoms the associated finite and general implication problems coincide and enjoy simple axiomatizations. However, the situation changes drastically when keys and independence atoms are combined. We show that the finite and the general implication problems are already different for keys and unary independence atoms. Furthermore, we establish a finite axiomatization for the general implication problem, and show that the finite implication problem does not enjoy a k-ary axiomatization for any k.

by <a href="">Miika Hannula</a>, <a href="">Juha Kontinen</a>, <a href="">Sebastian Link</a> at April 18, 2014 01:30 AM

Low-power Distance Bounding. (arXiv:1404.4435v1 [cs.CR])

A distance bounding system guarantees an upper bound on the physical distance between a verifier and a prover. However, in contrast to a conventional wireless communication system, distance bounding systems introduce tight requirements on the processing delay at the prover and require high distance measurement precision making their practical realization challenging. Prior proposals of distance bounding systems focused primarily on building provers with minimal processing delays but did not consider the power limitations of provers and verifiers. However, in a wide range of applications (e.g., physical access control), provers are expected to be fully or semi-passive introducing additional constraints on the design and implementation of distance bounding systems.

In this work, we propose a new physical layer scheme for distance bounding and leverage this scheme to implement a distance bounding system with a low-power prover. Our physical layer combines frequency modulated continuous wave (FMCW) and backscatter communication. The use of backscatter communication enables low power consumption at the prover which is critical for a number of distance bounding applications. By using the FMCW-based physical layer, we further decouple the physical distance estimation from the processing delay at the prover, thereby enabling the realization of the majority of distance bounding protocols developed in prior art. We evaluate our system under various attack scenarios and show that it offers strong security guarantees against distance, mafia and terrorist frauds. Additionally, we validate the communication and distance measurement characteristics of our system through simulations and experiments and show that it is well suited for short-range physical access control and payment applications.

by <a href="">Aanjhan Ranganathan</a>, <a href="">Boris Danev</a>, <a href="">Srdjan Capkun</a> at April 18, 2014 01:30 AM

A heuristic prover for real inequalities. (arXiv:1404.4410v1 [cs.MS])

We describe a general method for verifying inequalities between real-valued expressions, especially the kinds of straightforward inferences that arise in interactive theorem proving. In contrast to approaches that aim to be complete with respect to a particular language or class of formulas, our method establishes claims that require heterogeneous forms of reasoning, relying on a Nelson-Oppen-style architecture in which special-purpose modules collaborate and share information. The framework is thus modular and extensible. A prototype implementation shows that the method is promising, complementing techniques that are used by contemporary interactive provers.

by <a href="">Jeremy Avigad</a>, <a href="">Robert Y. Lewis</a>, <a href="">Cody Roux</a> at April 18, 2014 01:30 AM

Partially Observed, Multi-objective Markov Games. (arXiv:1404.4388v1 [math.OC])

The intent of this research is to generate a set of non-dominated policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is described by an infinite horizon, partially observed Markov game (POMG). At each decision epoch, each agent knows: its past and present states, its past actions, and noise corrupted observations of the other agent's past and present states. The actions of each agent are determined at each decision epoch based on these data. The leader considers multiple objectives in selecting its policy. The follower considers a single objective in selecting its policy with complete knowledge of and in response to the policy selected by the leader. This leader-follower assumption allows the POMG to be transformed into a specially structured, partially observed Markov decision process (POMDP). This POMDP is used to determine the follower's best response policy. A multi-objective genetic algorithm (MOGA) is used to create the next generation of leader policies based on the fitness measures of each leader policy in the current generation. Computing a fitness measure for a leader policy requires a value determination calculation, given the leader policy and the follower's best response policy. The policies from which the leader can select a most preferred policy are the non-dominated policies of the final generation of leader policies created by the MOGA. An example is presented that illustrates how these results can be used to support a manager of a liquid egg production process (the leader) in selecting a sequence of actions to best control this process over time, given that there is an attacker (the follower) who seeks to contaminate the liquid egg production process with a chemical or biological toxin.

by <a href="">Yanling Chang</a>, <a href="">Alan L. Erera</a>, <a href="">Chelsea C. White III</a> at April 18, 2014 01:30 AM

Latency-Bounded Target Set Selection in Social Networks. (arXiv:1303.6785v2 [cs.DS] UPDATED)

Motivated by applications in sociology, economy and medicine, we study variants of the Target Set Selection problem, first proposed by Kempe, Kleinberg and Tardos. In our scenario one is given a graph $G=(V,E)$, integer values $t(v)$ for each vertex $v$ (\emph{thresholds}), and the objective is to determine a small set of vertices (\emph{target set}) that activates a given number (or a given subset) of vertices of $G$ \emph{within} a prescribed number of rounds. The activation process in $G$ proceeds as follows: initially, at round 0, all vertices in the target set are activated; subsequently at each round $r\geq 1$ every vertex of $G$ becomes activated if at least $t(v)$ of its neighbors are already active by round $r-1$. It is known that the problem of finding a minimum cardinality Target Set that eventually activates the whole graph $G$ is hard to approximate to a factor better than $O(2^{\log^{1-\epsilon}|V|})$. In this paper we give \emph{exact} polynomial time algorithms to find minimum cardinality Target Sets in graphs of bounded clique-width, and \emph{exact} linear time algorithms for trees.
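The activation process defined above is straightforward to simulate directly; the graph `adj` and thresholds `t` below are made-up example data, not from the paper:

```scala
// Threshold activation: a vertex becomes active once at least t(v) of its
// neighbours are already active; active vertices stay active.
// Made-up undirected 4-vertex graph with per-vertex thresholds.
val adj: Map[Int, Set[Int]] = Map(
  1 -> Set(2, 3), 2 -> Set(1, 3), 3 -> Set(1, 2, 4), 4 -> Set(3))
val t: Map[Int, Int] = Map(1 -> 1, 2 -> 1, 3 -> 2, 4 -> 1)

def activate(target: Set[Int], rounds: Int): Set[Int] =
  (1 to rounds).foldLeft(target) { (active, _) =>
    active ++ adj.keySet.filter(v => adj(v).count(active) >= t(v))
  }
```

The Target Set Selection question is then the inverse problem: find a small `target` so that `activate(target, rounds)` covers the required vertices within the prescribed number of rounds.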

by <a href="">Ferdinando Cicalese</a>, <a href="">Gennaro Cordasco</a>, <a href="">Luisa Gargano</a>, <a href="">M. Milanic</a>, <a href="">Ugo Vaccaro</a> at April 18, 2014 01:30 AM

DANCE: A Framework for the Distributed Assessment of Network Centralities. (arXiv:1108.1067v2 [cs.NI] UPDATED)

The analysis of large-scale complex networks is a major challenge in the Big Data domain. Given the large-scale of the complex networks researchers commonly deal with nowadays, the use of localized information (i.e. restricted to a limited neighborhood around each node of the network) for centrality-based analysis is gaining momentum in the recent literature. In this context, we propose a framework for the Distributed Assessment of Network Centralities (DANCE) in complex networks. DANCE offers a single environment that allows the use of different localized centrality proposals, which can be tailored to specific applications. This environment can be thus useful given the vast potential applicability of centrality-based analysis on large-scale complex networks found in different areas, such as Biology, Physics, Sociology, or Computer Science. Since the localized centrality proposals DANCE implements employ only localized information, DANCE can easily benefit from parallel processing environments and run on different computing architectures. To illustrate this, we present a parallel implementation of DANCE and show how it can be applied to the analysis of large-scale complex networks using different kinds of network centralities. This implementation is made available to complex network researchers and practitioners interested in using it through a scientific web portal.

by <a href="">Klaus Wehmuth</a>, <a href="">Antonio Tadeu A. Gomes</a>, <a href="">Artur Ziviani</a> at April 18, 2014 01:30 AM


Examples of functional programming in R [on hold]

When can a functional programming style really help in R? More specifically, what would be one (or some) good code example(s) showing situations where a "functional approach" would be well suited in R - examples of situations where functional programming techniques would really make a difference?

Some resources to base the answer could be:

PS: I have edited the question, maybe it is more specific now.

by Carlos Cinelli at April 18, 2014 01:17 AM

Read entire file in Scala?

What's a simple and canonical way to read an entire file into memory in Scala? (Ideally, with control over character encoding.)

The best I can come up with is:

io.Source.fromFile("file.txt").getLines.reduceLeft(_+_)

or am I supposed to use one of Java's god-awful idioms, the best of which (without using an external library) seems to be:

import java.util.Scanner
new Scanner(new File("file.txt")).useDelimiter("\\Z").next()

From reading mailing list discussions, it's not clear to me that is even supposed to be the canonical I/O library. I don't understand what its intended purpose is, exactly.

... I'd like something dead-simple and easy to remember. For example, in these languages it's very hard to forget the idiom ...

Ruby    open("file.txt").read
Python  open("file.txt").read()
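For comparison, here is a sketch of a variant that controls the encoding and closes the file handle (the helper name and default codec are this sketch's choices, not a canonical idiom); note that `mkString` preserves newlines, which the `getLines.reduceLeft(_+_)` snippet above silently drops:

```scala
import scala.io.{ Codec, Source }

// Read a whole file into a String with an explicit character encoding,
// making sure the underlying handle is closed even on failure.
def readWholeFile(path: String, codec: Codec = Codec.UTF8): String = {
  val src = Source.fromFile(path)(codec)
  try src.mkString finally src.close()
}
```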

by Brendan OConnor at April 18, 2014 01:17 AM

Portland Pattern Repository


Are promises flawed?

Like the author of this question I'm trying to understand the reasoning for user-visible promises in Scala 2.10's futures and promises.

Particularly, going again to the example from the SIP, isn't it completely flawed:

import scala.concurrent.{ future, promise }
val p = promise[T]
val f = p.future
val producer = future {
  val r = produceSomething()
  p success r
}
val consumer = future {
  f onSuccess {
    case r => doSomethingWithResult()
  }
}
I am imagining the case where the call to produceSomething results in a runtime exception. Because promise and producer-future are completely detached, this means the system hangs and the consumer will never complete with either success or failure.

So the only safe way to use promises requires something like

val producer = future {
  try {
    val r = produceSomething()
    p success r
  } catch {
    case e: Throwable =>
      p failure e
      throw e  // ouch
  }
}
This is obviously error-prone and verbose.

The only case I can see for a visible promise type—where future {} is insufficient—is the one of the callback hook in M. A. D.'s answer. But the example of the SIP doesn't make sense to me.
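One conventional way to avoid the hang is to complete the promise from a `Try`, so failures propagate as well as successes. A minimal sketch, written against the plain `Future`/`Promise` types rather than the SIP's lowercase helpers; `produceSomething` here is a stand-in that always throws:

```scala
import scala.concurrent.{ Await, Future, Promise }
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Try

// Stand-in producer that fails, to show the failure actually propagates.
def produceSomething(): Int = throw new RuntimeException("boom")

val p = Promise[Int]()

// Try captures the exception, and complete forwards it to the promise,
// so a throwing producer fails p.future instead of leaving it pending.
Future { p.complete(Try(produceSomething())) }

// Equivalently: p.completeWith(Future(produceSomething()))
```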

by 0__ at April 18, 2014 12:54 AM

Planet Theory

PReaCH: A Fast Lightweight Reachability Index using Pruning and Contraction Hierarchies

Authors: Florian Merz, Peter Sanders
Download: PDF
Abstract: We develop the data structure PReaCH (for Pruned Reachability Contraction Hierarchies) which supports reachability queries in a directed graph, i.e., it supports queries that ask whether two nodes in the graph are connected by a directed path. PReaCH adapts the contraction hierarchy speedup techniques for shortest path queries to the reachability setting. The resulting approach is surprisingly simple and guarantees linear space and near linear preprocessing time. Orthogonally to that, we improve existing pruning techniques for the search by gathering more information from a single DFS-traversal of the graph. PReaCH-indices significantly outperform previous data structures with comparable preprocessing cost. Methods with faster queries need significantly more preprocessing time in particular for the most difficult instances.
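The query PReaCH answers can be illustrated with a plain DFS sketch; PReaCH itself accelerates exactly this query with pruning and contraction hierarchies, and the graph `g` below is made-up example data:

```scala
// Reachability query: is there a directed path from u to v?
// g maps each node to its list of out-neighbours.
def reaches(g: Map[Int, List[Int]], u: Int, v: Int): Boolean = {
  def dfs(x: Int, seen: Set[Int]): Set[Int] =
    g.getOrElse(x, Nil).foldLeft(seen + x) { (s, y) =>
      if (s(y)) s else dfs(y, s)
    }
  dfs(u, Set.empty)(v)
}

// Made-up example graph: 4 -> 1 -> 2 -> 3
val g = Map(1 -> List(2), 2 -> List(3), 4 -> List(1))
```

This linear-time traversal per query is what a reachability index amortizes away by precomputation.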

April 18, 2014 12:42 AM

A Simple Order-Oblivious O(log log(rank))-Competitive Algorithm for the Matroid Secretary Problem

Authors: Moran Feldman, Ola Svensson, Rico Zenklusen
Download: PDF
Abstract: We present an O(log log(rank)) algorithm for the matroid secretary problem. Our algorithm can be interpreted as a distribution over a simple type of matroid secretary algorithms which are easy to analyze. This improves on the previously best algorithm for the matroid secretary problem in both the competitive ratio and its simplicity. Furthermore, our procedure is order-oblivious, which implies that it leads to an O(log log(rank)) competitive algorithm for single-sample prophet inequalities.

April 18, 2014 12:42 AM

Approximation Algorithms for Hypergraph Small Set Expansion and Small Set Vertex Expansion

Authors: Anand Louis, Yury Makarychev
Download: PDF
Abstract: The expansion of a hypergraph, a natural extension of the notion of expansion in graphs, is defined as the minimum over all cuts in the hypergraph of the ratio of the number of the hyperedges cut to the size of the smaller side of the cut. We study the Hypergraph Small Set Expansion problem, which, for a parameter $\delta \in (0,1/2]$, asks to compute the cut having the least expansion while having at most $\delta$ fraction of the vertices on the smaller side of the cut. We present two algorithms. Our first algorithm gives an $\tilde O(\delta^{-1} \sqrt{\log n})$ approximation. The second algorithm finds a set with expansion $\tilde O(\delta^{-1}(\sqrt{d_{\text{max}}r^{-1}\log r\, \phi^*} + \phi^*))$ in a $r$--uniform hypergraph with maximum degree $d_{\text{max}}$ (where $\phi^*$ is the expansion of the optimal solution). Using these results, we also obtain algorithms for the Small Set Vertex Expansion problem: we get an $\tilde O(\delta^{-1} \sqrt{\log n})$ approximation algorithm and an algorithm that finds a set with vertex expansion $O\left(\delta^{-1}\sqrt{\phi^V \log d_{\text{max}} } + \delta^{-1} \phi^V\right)$ (where $\phi^V$ is the vertex expansion of the optimal solution).

For $\delta=1/2$, Hypergraph Small Set Expansion is equivalent to the hypergraph expansion problem. In this case, our approximation factor of $O(\sqrt{\log n})$ for expansion in hypergraphs matches the corresponding approximation factor for expansion in graphs due to ARV.

April 18, 2014 12:42 AM

An All-Around Near-Optimal Solution for the Classic Bin Packing Problem

Authors: Shahin Kamali, Alejandro López-Ortiz
Download: PDF
Abstract: In this paper we present the first algorithm with optimal average-case and close-to-best known worst-case performance for the classic on-line problem of bin packing. It has long been observed that known bin packing algorithms with optimal average-case performance were not optimal in the worst-case sense. In particular First Fit and Best Fit had optimal average-case ratio of 1 but a worst-case competitive ratio of 1.7. The wasted space of First Fit and Best Fit for a uniform random sequence of length $n$ is expected to be $\Theta(n^{2/3})$ and $\Theta(\sqrt{n} \log ^{3/4} n)$, respectively. The competitive ratio can be improved to 1.691 using the Harmonic algorithm; further variations of this algorithm can push down the competitive ratio to 1.588. However, Harmonic and its variations have poor performance on average; in particular, Harmonic has average-case ratio of around 1.27. In this paper, first we introduce a simple algorithm which we term Harmonic Match. This algorithm performs as well as Best Fit on average, i.e., it has an average-case ratio of 1 and expected wasted space of $\Theta(\sqrt{n} \log ^{3/4} n)$. Moreover, the competitive ratio of the algorithm is as good as Harmonic, i.e., it converges to $ 1.691$ which is an improvement over 1.7 of Best Fit and First Fit. We also introduce a different algorithm, termed as Refined Harmonic Match, which achieves an improved competitive ratio of $1.636$ while maintaining the good average-case performance of Harmonic Match and Best Fit. Finally, our extensive experimental evaluation of the studied bin packing algorithms shows that our proposed algorithms have comparable average-case performance with Best Fit and First Fit, and this holds also for sequences that follow distributions other than the uniform distribution.
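The First Fit rule the abstract compares against can be sketched in a few lines; the item sizes in the test are made-up example data, and bins have capacity 1:

```scala
// First Fit: place each item into the first open bin that still has room,
// opening a new bin if none does. Items are sizes in (0, 1].
// Returns the load of each bin in opening order.
def firstFit(items: Seq[Double]): Vector[Double] =
  items.foldLeft(Vector.empty[Double]) { (bins, item) =>
    bins.indexWhere(_ + item <= 1.0) match {
      case -1 => bins :+ item                      // open a new bin
      case i  => bins.updated(i, bins(i) + item)   // reuse the first fit
    }
  }
```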

April 18, 2014 12:42 AM

On the Tree Search Problem with Non-uniform Costs

Authors: Ferdinando Cicalese, Balázs Keszegh, Bernard Lidický, Dömötör Pálvölgyi, Tomáš Valla
Download: PDF
Abstract: Searching in partially ordered structures has been considered in the context of information retrieval and efficient tree-like indexes, as well as in hierarchy based knowledge representation. In this paper we focus on tree-like partial orders and consider the problem of identifying an initially unknown vertex in a tree by asking edge queries: an edge query $e$ returns the component of $T-e$ containing the vertex sought for, while incurring some known cost $c(e)$.

The Tree Search Problem with Non-Uniform Cost is: given a tree $T$ where each edge has an associated cost, construct a strategy that minimizes the total cost of the identification in the worst case.

Finding the strategy guaranteeing the minimum possible cost is an NP-complete problem already for input tree of degree 3 or diameter 6. The best known approximation guarantee is the $O(\log n/\log \log \log n)$-approximation algorithm of [Cicalese et al. TCS 2012].

We improve upon the above results both from the algorithmic and the computational complexity points of view: We provide a novel algorithm that achieves an $O(\frac{\log n}{\log \log n})$-approximation of the cost of the optimal strategy. In addition, we show that finding an optimal strategy is NP-complete even when the input tree is a spider, i.e., at most one vertex has degree larger than 2.

April 18, 2014 12:42 AM

Deterministic Truncation of Linear Matroids

Authors: Daniel Lokshtanov, Pranabendu Misra, Fahad Panolan, Saket Saurabh
Download: PDF
Abstract: Let $M=(E,{\cal I})$ be a matroid. A {\em $k$-truncation} of $M$ is a matroid {$M'=(E,{\cal I}')$} such that for any $A\subseteq E$, $A\in {\cal I}'$ if and only if $|A|\leq k$ and $A\in {\cal I}$. Given a linear representation of $M$ we consider the problem of finding a linear representation of the $k$-truncation of this matroid. This problem can be abstracted out to the following problem on matrices. Let $M$ be an $n\times m$ matrix over a field $\mathbb{F}$. A {\em rank $k$-truncation} of the matrix $M$ is a $k\times m$ matrix $M_k$ (over $\mathbb{F}$ or a related field) such that for every subset $I\subseteq \{1,\ldots,m\}$ of size at most $k$, the set of columns corresponding to $I$ in $M$ has rank $|I|$ if and only if the corresponding set of columns in $M_k$ has rank $|I|$. Finding a rank $k$-truncation of a matrix is a common way to obtain a linear representation of the $k$-truncation of a linear matroid, which has many algorithmic applications. A common way to compute a rank $k$-truncation of an $n \times m$ matrix is to multiply the matrix with a random $k\times n$ matrix (with the entries from a field of an appropriate size), yielding a simple randomized algorithm. So a natural question is whether it is possible to obtain a rank $k$-truncation of a matrix {\em deterministically}. In this paper we settle this question for matrices over any finite field or the field of rationals ($\mathbb Q$). We show that given a matrix $M$ over a field $\mathbb{F}$ we can compute a $k$-truncation $M_k$ over the ring $\mathbb{F}[X]$ in deterministic polynomial time.

April 18, 2014 12:42 AM

A Control Dichotomy for Pure Scoring Rules

Authors: Edith Hemaspaandra, Lane A. Hemaspaandra, Henning Schnoor
Download: PDF
Abstract: Scoring systems are an extremely important class of election systems. A length-$m$ (so-called) scoring vector applies only to $m$-candidate elections. To handle general elections, one must use a family of vectors, one per length. The most elegant approach to making sure such families are "family-like" is the recently introduced notion of (polynomial-time uniform) pure scoring rules [Betzler and Dorn 2010], where each scoring vector is obtained from its precursor by adding one new coefficient. We obtain the first dichotomy theorem for pure scoring rules for a control problem. In particular, for constructive control by adding voters (CCAV), we show that CCAV is solvable in polynomial time for $k$-approval with $k \leq 3$, $k$-veto with $k \leq 2$, every pure scoring rule in which only the two top-rated candidates gain nonzero scores, and a particular rule that is a "hybrid" of 1-approval and 1-veto. For all other pure scoring rules, CCAV is NP-complete. We also investigate the descriptive richness of different models for defining pure scoring rules, proving how more rule-generation time gives more rules, proving that rationals give more rules than do the natural numbers, and proving that some restrictions previously thought to be "w.l.o.g." in fact do lose generality.

April 18, 2014 12:42 AM

Rainbow Colouring of Split Graphs

Authors: L. Sunil Chandran, Deepak Rajendraprasad, Marek Tesař
Download: PDF
Abstract: A rainbow path in an edge coloured graph is a path in which no two edges are coloured the same. A rainbow colouring of a connected graph G is a colouring of the edges of G such that every pair of vertices in G is connected by at least one rainbow path. The minimum number of colours required to rainbow colour G is called its rainbow connection number. Between them, Chakraborty et al. [J. Comb. Optim., 2011] and Ananth et al. [FSTTCS, 2012] have shown that for every integer k, k \geq 2, it is NP-complete to decide whether a given graph can be rainbow coloured using k colours.

A split graph is a graph whose vertex set can be partitioned into a clique and an independent set. Chandran and Rajendraprasad have shown that the problem of deciding whether a given split graph G can be rainbow coloured using 3 colours is NP-complete and further have described a linear time algorithm to rainbow colour any split graph using at most one colour more than the optimum [COCOON, 2012]. In this article, we settle the computational complexity of the problem on split graphs and thereby discover an interesting dichotomy. Specifically, we show that the problem of deciding whether a given split graph can be rainbow coloured using k colours is NP-complete for k \in {2,3}, but can be solved in polynomial time for all other values of k.

April 18, 2014 12:42 AM

A characterization of eventual periodicity

Authors: Teturo Kamae, Dong Han Kim
Download: PDF
Abstract: In this article, we show that the Kamae-Xue complexity function for an infinite sequence classifies eventual periodicity completely. We prove that an infinite binary word $x_1x_2 \cdots $ is eventually periodic if and only if $\Sigma(x_1x_2\cdots x_n)/n^3$ has a positive limit, where $\Sigma(x_1x_2\cdots x_n)$ is the sum of the squares of all the numbers of appearance of finite words in $x_1 x_2 \cdots x_n$, which was introduced by Kamae-Xue as a criterion of randomness in the sense that $x_1x_2\cdots x_n$ is more random if $\Sigma(x_1x_2\cdots x_n)$ is smaller. In fact, it is known that the lower limit of $\Sigma(x_1x_2\cdots x_n) /n^2 $ is at least 3/2 for any sequence $x_1x_2 \cdots$, while the limit exists as 3/2 almost surely for the $(1/2,1/2)$ product measure. For the other extreme, the upper limit of $\Sigma(x_1x_2\cdots x_n)/n^3$ is bounded by 1/3. There are sequences which are not eventually periodic but the lower limit of $\Sigma(x_1x_2\cdots x_n)/n^3$ is positive, while the limit does not exist.
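A naive sketch of computing $\Sigma$ as defined above (the sum of squared occurrence counts of every finite word, i.e. contiguous block, appearing in the prefix), for illustration only:

```scala
// Kamae-Xue quantity Sigma(x_1...x_n): enumerate all n(n+1)/2 substrings,
// count how often each distinct word occurs, and sum the squared counts.
def sigma(x: String): Long = {
  val words = for {
    i <- 0 until x.length
    j <- i + 1 to x.length
  } yield x.substring(i, j)
  words.groupBy(identity).values
    .map(occ => occ.size.toLong * occ.size)
    .sum
}
```

For example, "aa" contains "a" twice and "aa" once, giving $\Sigma = 2^2 + 1^2 = 5$.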

April 18, 2014 12:41 AM

Set Families with Low Pairwise Intersection

Authors: Calvin Beideman, Jeremiah Blocki
Download: PDF
Abstract: A $\left(n,\ell,\gamma\right)$-sharing set family of size $m$ is a family of sets $S_1,\ldots,S_m\subseteq [n]$ s.t. each set has size $\ell$ and each pair of sets shares at most $\gamma$ elements. We let $m\left(n,\ell,\gamma\right)$ denote the maximum size of any such set family and we consider the following question: How large can $m\left(n,\ell,\gamma\right)$ be? $\left(n,\ell,\gamma\right)$-sharing set families have a rich set of applications including the construction of pseudorandom number generators and usable and secure password management schemes. We analyze the explicit construction of Blocki et al using recent bounds on the value of the $t$'th Ramanujan prime. We show that this explicit construction produces a $\left(4\ell^2\ln 4\ell,\ell,\gamma\right)$-sharing set family of size $\left(2 \ell \ln 2\ell\right)^{\gamma+1}$ for any $\ell\geq \gamma$. We also show that the construction of Blocki et al can be used to obtain a weak $\left(n,\ell,\gamma\right)$-sharing set family of size $m$ for any $m >0$. These results are competitive with the inexplicit construction of Raz et al for weak $\left(n,\ell,\gamma\right)$-sharing families. We show that our explicit construction of weak $\left(n,\ell,\gamma\right)$-sharing set families can be used to obtain a parallelizable pseudorandom number generator with a low memory footprint by using the pseudorandom number generator of Nisan and Wigderson. We also prove that $m\left(n,n/c_1,c_2n\right)$ must be a constant whenever $c_2 \leq \frac{2}{c_1^3+c_1^2}$. We show that this bound is nearly tight as $m\left(n,n/c_1,c_2n\right)$ grows exponentially fast whenever $c_2 > c_1^{-2}$.

April 18, 2014 12:41 AM


How to explain rehosting and retargeting with T-diagrams?

I'm currently learning for an exam about compilers and found the following question:

(3 p.) Bootstrapping: Explain the concepts of rehosting and retargeting. Use T-diagrams.

As far as I understand, rehosting means to compile a compiler for another platform (host), so it should look like this:

| a       b |     --------------
-----   -----     | a        b |
    | c |-------------    ------
    -----| c       x || x |
         -----   ----------
             | ? |

Is this correct? And what does retargeting mean?

by Thomas Uhrig at April 18, 2014 12:39 AM


How do I get into a good Master's for Software Engineering?

I'm currently doing a BEng in CS at a Russell Group university (2nd year). I'm looking to apply for a SE master's at Oxford and Imperial after I finish this - how do I prepare myself for this?

submitted by just_passing_bye
[link] [1 comment]

April 18, 2014 12:34 AM


Spark: what's the best strategy for joining a 2-tuple-key RDD with single-key RDD?

I have two RDD's that I want to join and they look like this:

val rdd1:RDD[(T,U)]
val rdd2:RDD[((T,W), V)]

It happens to be the case that the key values of rdd1 are unique and also that the tuple-key values of rdd2 are unique. I'd like to join the two data sets so that I get the following rdd:

val rdd_joined:RDD[((T,W), (U,V))]

What's the most efficient way to achieve this? Here are a few ideas I've thought of.

Option 1:

val m = rdd1.collectAsMap
val rdd_joined ={case ((t,w), u) => ((t,w), (u, m.get(t)))})

Option 2:

val distinct_w ={case ((t,w), u) => w}).distinct
val rdd_joined = rdd1.cartesian(distinct_w).join(rdd2)

Option 1 will collect all of the data to the master, right? So that doesn't seem like a good option if rdd1 is large (it's relatively large in my case, although an order of magnitude smaller than rdd2). Option 2 does an ugly distinct and cartesian product, which also seems very inefficient. Another possibility that crossed my mind (but which I haven't tried yet) is to do option 1 and broadcast the map, although it would be better to broadcast in a "smart" way so that the keys of the map are co-located with the keys of rdd2.

Has anyone come across this sort of situation before? I'd be happy to have your thoughts.
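For what it's worth, the "collect + broadcast" idea (Option 1 done map-side) amounts to a hash join. Here is a plain-Python sketch of the semantics with hypothetical toy data standing in for the RDDs; in Spark, `m` would be wrapped in a broadcast variable so each worker gets one read-only copy:

```python
# Toy stand-ins for the two RDDs (hypothetical data, unique keys as stated).
rdd1 = [("t1", "u1"), ("t2", "u2")]                  # RDD[(T, U)]
rdd2 = [(("t1", "w1"), "v1"), (("t2", "w2"), "v2"),
        (("t1", "w3"), "v3")]                        # RDD[((T, W), V)]

m = dict(rdd1)                                       # rdd1.collectAsMap

# Map-side hash join: look up t in the broadcast map for each ((t, w), v).
rdd_joined = [((t, w), (m[t], v)) for ((t, w), v) in rdd2 if t in m]

print(rdd_joined)
# [(('t1', 'w1'), ('u1', 'v1')), (('t2', 'w2'), ('u2', 'v2')), (('t1', 'w3'), ('u1', 'v3'))]
```

This avoids both the cartesian product and a shuffle of rdd2, at the cost of shipping all of rdd1 to every worker.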


by RyanH at April 18, 2014 12:25 AM

Error on running Kafka-0.8.1 for Log4j Appender

I am trying to push log data into kafka broker by using log4j appender. I can't push message into broker, is there anyone can help me for this issue? I am using log4j-1.2.15jar and kafka_2.9.2-0.8.1.jar. This is my appender:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration xmlns:log4j="">

    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%-5p %c{1} - %m%n"/>
        </layout>
    </appender>

    <appender class="kafka.producer.KafkaLog4jAppender" name="kafka">
        <param name="BrokerList" value="oos007:9092"/>
        <param name="Topic" value="SessionSpy"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param value="%d{ISO8601} %p %c{2} - %m%n" name="ConversionPattern"/>
        </layout>
    </appender>

    <logger name="kafka.push">
        <level value="ALL"/>
        <appender-ref ref="kafka"/>
    </logger>

    <root>
        <level value="ALL"/>
        <appender-ref ref="console"/>
    </root>

</log4j:configuration>


My test class is :

package com.kafka.test;

import org.apache.log4j.Logger;

public class KafkaLogTester {

    protected static final Logger logger = Logger.getLogger(KafkaLogTester.class);

    public static void main(String[] args) {"OMG! It works! ");
    }
}


And I got this error:

log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN Error during default initialization
java.lang.NoClassDefFoundError: com/yammer/metrics/Metrics
    at kafka.metrics.KafkaMetricsGroup$class.newMeter(KafkaMetricsGroup.scala:46)
    at kafka.producer.ProducerStats.newMeter(ProducerStats.scala:23)
    at kafka.producer.ProducerStats.<init>(ProducerStats.scala:24)
    at kafka.producer.ProducerStatsRegistry$$anonfun$1.apply(ProducerStats.scala:33)
    at kafka.producer.ProducerStatsRegistry$$anonfun$1.apply(ProducerStats.scala:33)
    at kafka.utils.Pool.getAndMaybePut(Pool.scala:61)
    at kafka.producer.ProducerStatsRegistry$.getProducerStats(ProducerStats.scala:37)
    at kafka.producer.async.DefaultEventHandler.<init>(DefaultEventHandler.scala:48)
    at kafka.producer.Producer.<init>(Producer.scala:59)
    at kafka.producer.KafkaLog4jAppender.activateOptions(KafkaLog4jAppender.scala:84)
    at org.apache.log4j.config.PropertySetter.activate(
    at org.apache.log4j.xml.DOMConfigurator.parseAppender(
    at org.apache.log4j.xml.DOMConfigurator.findAppenderByName(
    at org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(
    at org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(
    at org.apache.log4j.xml.DOMConfigurator.parseCategory(
    at org.apache.log4j.xml.DOMConfigurator.parse(
    at org.apache.log4j.xml.DOMConfigurator.doConfigure(
    at org.apache.log4j.xml.DOMConfigurator.doConfigure(
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(
    at org.apache.log4j.LogManager.<clinit>(
    at org.apache.log4j.Logger.getLogger(
    at com.kafka.test.KafkaLogTester.<clinit>(
Caused by: java.lang.ClassNotFoundException: com.yammer.metrics.Metrics
    at Method)
    at java.lang.ClassLoader.loadClass(
    at sun.misc.Launcher$AppClassLoader.loadClass(
    at java.lang.ClassLoader.loadClass(
    ... 23 more

by user3525639 at April 18, 2014 12:23 AM


What does it mean for CvRDT replicas to transmit their state "infinitely often"?

In Shapiro et al.'s SSS '11 paper on Conflict-Free Replicated Data Types for eventual consistency of distributed replicated objects, they describe a system model in which replicas transmit their state to one another "infinitely often". On the receiving end, a replica can merge the received state with its own local state by executing a method m.

In the case of convergent replicated data types, or CvRDTs, the states a replica can take on are elements of a (join-semi)lattice, and the m operation takes the join of the received state and the local state with respect to the lattice. Also, replicas can update their local state (by calling an update method, u), but only in a way that is inflationary with respect to the lattice (that is, the state can only "get bigger").
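A minimal concrete example of such a lattice (my own illustration, not from the paper) is a grow-only set: update only adds elements, so it is inflationary, and m is set union, the lattice join:

```python
# A tiny CvRDT sketch: a grow-only set. States form a join-semilattice
# under union; merge (m) is the lattice join; update (u) is inflationary.
def merge(a, b):
    return a | b          # least upper bound of two replica states

def update(state, x):
    return state | {x}    # inflationary: the state only grows

r1 = update(update(set(), 1), 2)   # replica 1 observed 1 and 2
r2 = update(set(), 3)              # replica 2 observed 3

# Joins commute, associate, and are idempotent, so replicas converge
# regardless of the order or multiplicity of merges.
assert merge(r1, r2) == merge(r2, r1) == {1, 2, 3}
assert merge(r1, r1) == r1
```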

My questions have to do with the "infinitely often" bit, and are as follows:

  • Is it just the state transmission that occurs infinitely often, or also the calls to m? That is, does the process running at some replica have to explicitly call the m method in order to do a merge, or are these merges happening infinitely often "in the background"? (It seems to me that it must be happening infinitely often in the background, because otherwise, if a process didn't call m at the end of its run, then its replica wouldn't converge with the others, breaking the eventual consistency guarantee that CvRDTs provide.)

  • But, if merges occur infinitely often, do updates to a CvRDT really have to be inflationary? In the presence of infinitely-often merges, it seems like if a non-inflationary update ever happened, that update would just be lost -- which would be unfortunate, but wouldn't actually pose a problem for convergence. When I look at the proof that CvRDTs are eventually consistent, I don't see the place that the inflationary condition on the u method is required. Why exactly does u have to be inflationary?

  • Finally, the definition of causal history in the paper seems fishy if merges are really happening "infinitely often", because if so there would be "no room" for any other method execution to occur! The k'th method execution would have to be a merge, for all k, wouldn't it? Am I taking "infinitely often" too literally? (Update: Yes, I am taking it too literally! See below.)


(Update: After a Twitter discussion with Niklas Ekström, I understand the meaning of "infinitely often" better. If an event occurs "infinitely often", that doesn't mean anything about the frequency of it occurring; it just means that the event occurs an infinite number of times.

So, if event X occurs infinitely often, and some other event Y occurs once (or some finite number of times), then X is guaranteed to occur after Y, because infinitely many occurrences of X cannot all occur before the occurrence(s) of Y. And here, in particular, there is no way for a replica to update itself and for the neighbors not to find out, because a state transmission will always occur after that update (because there are infinitely many state transmissions, and therefore they can't all occur before that update!).

I'm still not sure if I should be thinking of state transmissions and merges as happening infinitely often, or just state transmissions. But, even if merges also occur infinitely often, my comment above about there being "no room" for any other event to occur doesn't make sense, considering what "infinitely often" actually means.)

by Lindsey Kuper at April 18, 2014 12:08 AM

Portland Pattern Repository


EM algorithm for two Gaussian models

This is about basic machine learning which I do not understand clearly.

I have 2 Gaussian models $G_1$ and $G_2$ and given a list of data as below.

(1, 3, 4, 5, 7, 8, 9, 13, 14, 15, 16, 18, 23)

I do not know which numbers were generated by which $G_i$ model, but I want to do Expectation Maximization for clustering.

So, what are the hidden variables? And how do I perform the Expectation and Maximization steps?

Could you show some calculation or short Python code of the likelihood metric? (Perhaps 2 iterations.)

I got the basics of EM algorithm from here, but I cannot apply to this question.
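For what it's worth, here is one possible plain-Python sketch (my own illustration, with arbitrary deterministic starting parameters). The hidden variables are the per-point component assignments; the E-step estimates them softly as responsibilities, and the M-step re-fits each Gaussian from those weights:

```python
import math

data = [1, 3, 4, 5, 7, 8, 9, 13, 14, 15, 16, 18, 23]

def gauss(x, mu, var):
    # Gaussian density with mean mu and variance var
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Arbitrary initial guesses (assumption: any two distinct starting means)
mu = [5.0, 15.0]
var = [4.0, 4.0]
pi = [0.5, 0.5]

for _ in range(20):
    # E-step: responsibility of each component for each point (the hidden variables)
    resp = []
    for x in data:
        p = [pi[k] * gauss(x, mu[k], var[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate each component from the soft assignments
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        pi[k] = nk / len(data)

print(round(mu[0], 1), round(mu[1], 1))  # the low and high cluster means
```

With this data the points below roughly 10 end up in one component and the rest in the other.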

by Karyo at April 18, 2014 12:00 AM

HN Daily

April 17, 2014


Financial Engineer as a career [on hold]

I'm currently studying for a Bachelor's in economics. Can I become a financial engineer with this degree? Why or why not? Also, what is the minimum GPA I need to get into such a program?

by user2970322 at April 17, 2014 11:57 PM

Lambda the Ultimate Forum

Looking for a good online forum on compiler design and implementation

Hi, I'm looking for a good online forum that focuses on the practical details of building compilers. I've posted a lot of material on LLVM over the years; however, that forum is really dedicated to issues specific to LLVM, not compilers in general. Similarly, LtU is more focused on theory than practice (which is not to say that I'm not interested in theory).

I'd like to find some place where I could post questions about things like efficient multimethod dispatch, garbage collection algorithms, and so on, and not feel like I was off-topic.

Also, when it comes to feedback I would favor quality over quantity.

Any suggestions?

April 17, 2014 11:53 PM


Scala: aggregate column based file

I have a huge file (does not fit into memory) which is tab separated with two columns (key and value), and pre-sorted on the key column. I need to call a function on all values for a key and write out the result. For simplicity, one can assume that the values are numbers and the function is addition.

So, given an input:

A 1
A 2
B 1
B 3

The output would be:

A 3
B 4

For this question, I'm not so much interested in reading/writing the file, but more in the list comprehension side. It is important though that the whole content (input as well as output) doesn't fit into memory. I'm new to Scala, and coming from Java I'm interested what would be the functional/Scala way to do that.


Based on AmigoNico's comment, I came up with the below constant memory solution. Any comments / improvements are appreciated!

val writeAggr = (kv : (String, Int)) => { println(kv._1 + " " + kv._2) }
writeAggr(
  (("", 0) /:"/tmp/xx").getLines) { (keyAggr, line) =>
    val Array(k, v) = line split ' '
    if (keyAggr._1.equals(k)) {
      (k, keyAggr._2 + v.toInt)
    } else {
      if (!keyAggr._1.equals("")) {
        writeAggr(keyAggr)
      }
      (k, v.toInt)
    }
  }
)

by benroth at April 17, 2014 11:46 PM

Making Scala version of this Java method functional

Trying to port the following Java method to Scala. I'm ending up with a lot of nested maps and an ugly return statement from "foreach". The converted method looks ugly, just like its OO counterpart in Java.

by user2066049 at April 17, 2014 11:42 PM

Convert legacy Java code into Scala functional idioms

I am looking at some legacy Java code which now will have to be converted to a new Scala system. The legacy code looks as below. In the new Scala project I have all Java value objects as case classes. What's the best way you recommend to bring this Java code (OO-styled and side-effecting) over to Scala (without side effects, mutation, etc.)?

EDIT: Does collectFirst look appropriate as an equivalent of Java's break?

ln.collectFirst { case l if availableSlot(allowedSection, vehicle, l) > 1 => vehicle.copy(allocatedSlot = Some(5), allocatedLane = Some(l)) }

by user2066049 at April 17, 2014 11:40 PM

Wes Felter

StorageReview: Samsung SSD 840 Pro Enterprise SSD Review

StorageReview: Samsung SSD 840 Pro Enterprise SSD Review:

This is great. Enterprise products need to prove their value over consumer versions; they shouldn’t get a free pass.

April 17, 2014 11:23 PM

"Any true audiophile will notice the warmer sound of proper laser-trimmed resistors"

“Any true audiophile will notice the warmer sound of proper laser-trimmed resistors”

- jff
Artisanal hand-trimmed, surely?

April 17, 2014 11:22 PM


I don't get the joke

In ZMQ messaging library there is large number of patterns derived from a base "Pirate" pattern. To quote the documentation:

I like to call the Pirate patterns (you'll eventually get the joke, I hope).

I have a pretty thorough understanding of the ZMQ architecture, having worked with it over half a dozen projects and a couple of years. Despite this, and reading basically the entire guide, I don't get the joke.

Perhaps there isn't one, but I can't shake the feeling that I am missing something fairly obvious. Thanks.

by meawoppl at April 17, 2014 11:18 PM

Planet Clojure

Clojure Procedural Dungeons

Clojure Procedural Dungeons

From the webpage:

When making games, there are two ways to make a dungeon. The common method is to design one in the CAD tool of our choice (or to draw one in case of 2D games).

The alternative is to automatically generate random Dungeons by using a few very powerful algorithms. We could automatically generate a whole game world if we wanted to, but let’s take one step after another.

In this Tutorial we will implement procedural Dungeons in Clojure, while keeping everything as simple as possible so everyone can understand it.

Just in case you are interested in a gaming approach for a topic maps interface.

Not as crazy as that may sound. One of the brightest CS types I ever knew spent a year playing a version of Myst from start to finish.

Think about app sales if you can make your interface addictive.

Suggestion: Populate your topic map authoring interface with trolls (accounting), smiths (manufacturing), cavalry (shipping), royalty (management), wizards (IT), etc., and turn the collection of information about their information into tokens, spells, etc. Sprinkle in user preference activities and companions.

That would be a lot of work but I suspect you would get volunteers to create new levels as your information resources evolve.

by Patrick Durusau at April 17, 2014 11:13 PM

Planet Clojure

Meltdown 1.0.0-beta10 is released


Meltdown is a Clojure interface to Reactor, an asynchronous programming, event passing and stream processing toolkit for the JVM.

1.0.0-beta10 is a development milestone with minor improvements.

Changes between 1.0.0-beta9 and 1.0.0-beta10

Reactor Update

Reactor is updated to 1.1.0.M3.

2-arity of clojurewerkz.meltdown.reactor/on is Removed

Reactor 1.1.0.M3 no longer supports a default key (selector), so the 2-arity of clojurewerkz.meltdown.reactor/on was removed.

Clojure 1.6

Meltdown now depends on org.clojure/clojure version 1.6.0. It is still compatible with Clojure 1.4 and if your project.clj depends on a different version, it will be used, but 1.6 is the default now.

Changes between 1.0.0-beta8 and 1.0.0-beta9

Consumer and Selector Introspection

clojurewerkz.meltdown.selectors/selectors-on is a new function that returns a list of selectors registered on a reactor:

(require '[clojurewerkz.meltdown.reactor   :as mr])
(require '[clojurewerkz.meltdown.selectors :as ms :refer [$]])

(let [r (mr/create)]
  (mr/on r ($ "a.key") (fn [evt]))
  (ms/selectors-on r))

clojurewerkz.meltdown.consumers/consumer-count is a new function that returns a number of consumers registered on a reactor:

(require '[clojurewerkz.meltdown.reactor   :as mr])
(require '[clojurewerkz.meltdown.selectors :refer [$]])
(require '[clojurewerkz.meltdown.consumers :as mc])

(let [r (mr/create)]
  (mr/on r ($ "a.key") (fn [evt]))
  (mc/consumer-count r))

Change log

Meltdown's change log is available on GitHub.

Meltdown is a ClojureWerkz Project

Meltdown is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Elastisch, a Clojure client for ElasticSearch
  • Monger, a Clojure MongoDB client for a more civilized age
  • Cassaforte, a Clojure Cassandra client
  • Titanium, a Clojure graph library
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Meltdown, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

About the Author

Michael on behalf of the ClojureWerkz Team

by The ClojureWerkz Team at April 17, 2014 11:00 PM


How important is a bachelor's in CS for getting a job as a software engineer?

I'm in undergrad right now studying as a math major/cs minor and I want to become a software engineer (the math is for just in case I decide that I don't want to do CS for the rest of my life).

submitted by assoyster
[link] [5 comments]

April 17, 2014 11:00 PM


Computational Power of Neural Networks?

Let's say we have a single-layer feed-forward neural network with $n$ inputs and one output. It calculates a function from $\lbrace 0,1\rbrace ^{n}\rightarrow\lbrace 0,1\rbrace $; it's fairly easy to see that this has at least the same computational power as $AC^0$. Just for fun, we'll call the set of functions computable by a single-layer neural network "$Neural$".

It seems, however, that it might have more computational power than $AC^0$ alone.

So ... is $AC^0 \subseteq Neural$ or is $Neural = AC^0$? Also has this kind of complexity class been studied before?
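One concrete reason to suspect the containment is strict (my own illustration, not from the question): a single threshold unit already computes MAJORITY, which is known not to be in $AC^0$:

```python
# A single-layer unit: output 1 iff the weighted sum reaches the threshold.
def neuron(weights, bias, xs):
    return int(sum(w * x for w, x in zip(weights, xs)) + bias >= 0)

def majority(xs):
    # All weights 1, threshold floor(n/2) + 1: a strict majority of 1s.
    n = len(xs)
    return neuron([1] * n, -(n // 2 + 1), xs)

assert majority([1, 1, 0]) == 1
assert majority([1, 0, 0]) == 0
assert majority([1, 1, 0, 0]) == 0  # a tie is not a strict majority
```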

by gabgoh at April 17, 2014 10:49 PM


can a post machine have more than one accepting state?

I was searching through Google and I couldn't find anything.

Can a post machine have more than one accepting state?

Yes or no?

by Manual at April 17, 2014 10:38 PM


PWIQF excercise solution

I am a software developer with no previous experience or knowledge in finance, and I have recently started to build my knowledge in this area. I am working through the book Paul Wilmott Introduces Quantitative Finance. I ran into an exercise question that I haven't been able to fully figure out and was hoping someone could enlighten me.

The question:

A share currently trades at $60. A European call with exercise price $58 and expiry in three months trades at $3. The three-month default-free discount rate is 5%. A put is offered on the market, with exercise price $58 and expiry in three months, for $1.50. Do any arbitrage opportunities now exist? If there is a possible arbitrage, then construct a portfolio that will take advantage of it. (This is an application of put-call parity.)

I have been able to figure out that there is in fact arbitrage (I think anyway) in this situation using the formula $C - P = S - Ee^{-r(T - t)}$, which gives a value of 1.5 on the left side and about 2.72 on the right. The part I can't figure out is how to construct a portfolio to take advantage of the arbitrage.

Also, if anyone can clarify what it means when $C - P$ is less than the right side of that equation vs. when it is greater than the right side, that would be very helpful as well.
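A numeric check of the parity computation, with one standard arbitrage construction sketched (my own illustration; check the book's conventions before relying on it):

```python
import math

S, E, C, P = 60.0, 58.0, 3.0, 1.50   # spot, strike, call, put prices
r, T = 0.05, 0.25                    # rate, time to expiry in years

lhs = C - P                          # 1.5
rhs = S - E * math.exp(-r * T)       # about 2.72

# lhs < rhs: the synthetic forward (long call, short put) is cheap relative
# to the stock. Buy the call, sell the put, short the stock: that raises
# S - C + P in cash today, which is invested at the riskless rate.
cash_now = S - C + P                 # 58.5
cash_at_expiry = cash_now * math.exp(r * T)

# At expiry the call/put pair buys the share for E whatever S_T is,
# covering the short position, so the riskless profit is:
profit = cash_at_expiry - E
print(round(lhs, 2), round(rhs, 2), round(profit, 2))  # 1.5 2.72 1.24
```

When $C - P$ instead exceeds the right-hand side, the mirror-image portfolio (sell the call, buy the put, buy the stock, borrow $Ee^{-r(T-t)}$) locks in the profit.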

by mclaassen at April 17, 2014 10:14 PM


I want to make an emulator for a senior this feasible? And where do I start?

I graduate in December, and I am interested in emulating some type of game hardware like the NES or Genesis. Is this feasible with the amount of time I have and also my programming experience? Of course I have taken many programming courses (data structures and algorithms, etc.), but I want to challenge myself.

submitted by weallgonnamakeit
[link] [7 comments]

April 17, 2014 10:12 PM


How do I make ido (or something else) recursively search in all subdirectories?

I've gotten very used to fuzzyfinder in vim, which works just like ido's find-file except it also finds files in subdirectories, an unlimited number of directories deep. I would be surprised if there wasn't a way to do this in emacs. What do I need?

submitted by joequin
[link] [5 comments]

April 17, 2014 10:12 PM



Here are a few of the many Roll-a-Sketch drawings I did for folks in Seattle a couple weeks ago!!

Roll-a-Sketch drawings, of course, have their elements generated from lists of attributes by the rolling of dice! Like this CROCODILE + WHALE + POLICE + VICTORIAN:

oi ye git


oi ye blighter

Or his buddy/long-lost twin (we decided), ELEPHANT + CACTUS + HELICOPTER + BATMAN:

oi ye penguin

Finally, that old chestnut, the BEAR + ANGEL + SPORTS + CLOWN:

oi ye pitcher

Would you like to get your very own Roll-a-Sketch? I will be at WonderCon tomorrow, in Anaheim!

i wonder

IMPORTANT: I will not be there the whole weekend, and I will not have a table! I’ll be around on Friday the 18th only, to check out the show and visit friends.

BUT: A few times during the day I’ll set up shop for Roll-a-Sketches!

If you will be at WonderCon tomorrow, and you would like me to text you when I’m doing sketches, here is a form:


I’ll only text you tomorrow, up to three times maybe, to let you know where I’ll be — then I’ll delete your number when I leave. Is it weird? I dunno!! GO FOR IT

Just look for this!!

fly, you fools

Will I have a banner fluttering from a flagpole like a royal herald?


by David Malki at April 17, 2014 10:07 PM


How would you correct a GARCH model to deal with non mean reverting volatility?

I am currently attempting to model and forecast the volatility of bitcoin but have not been able to find a GARCH model that fits the data appropriately. I've used tick data sampled at 1-hour intervals over a 2-year period and converted it into hourly returns. The best model I have been able to produce so far is an asymmetric GARCH(3,3) model.

The portmanteau stat is 198.4**

alpha(1)+beta(1) 1.02753

I have tried GARCH-M, EGARCH, and TGARCH, all up to (3,3). For some reason I cannot specify (p,q) to be any higher than 3. What steps can I take to improve the model further?

Would it be beneficial to account for seasonality or jumps, similar to Todorov (2011) and Andersen and Bollerslev (2005)?

Note: I have limited programming knowledge, so I would prefer to avoid R; the output was produced by PCGIVE10.

by JACK3D at April 17, 2014 09:59 PM


Why is this algorithm $O(n^3)?$

In a programming book that I'm currently reading it's stated that

$$\sum\limits_{i=1}^{n}i^2$$ is $O(n^3)$. My understanding was that $i\times i$ is a primitive operation and the complexity would be $O(n)$. What am I missing?
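The point (my reading, not from the book) is that the bound describes the whole algorithm, not a single multiplication: if the $i$-th step costs $i^2$ operations, the total is $\sum_{i=1}^{n} i^2 = n(n+1)(2n+1)/6 = \Theta(n^3)$. A quick check:

```python
def total_ops(n):
    # Total work of a loop whose i-th iteration performs i*i primitive steps.
    return sum(i * i for i in range(1, n + 1))

def closed_form(n):
    # n(n+1)(2n+1)/6: a cubic polynomial in n, hence O(n^3).
    return n * (n + 1) * (2 * n + 1) // 6

assert total_ops(10) == 385
assert all(total_ops(n) == closed_form(n) for n in range(1, 100))
```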

by planetp at April 17, 2014 09:49 PM



Reusing a stub/redef across a speclj context

I'm writing tests for a Clojure app using Speclj. I'm accustomed in BDD to do things like this:

context "some context"
  stub my-function :return true
  it "has behavior one"
    should true my-function
  it "has behavior two"
    should_not false my-function

But in Speclj I can't seem to find an example of how to share the stub across the characteristics, so I'm currently stuck writing code like this:

(describe "this"
  (context "that"
    (it "accepts nil"
      (with-redefs [called-fn (constantly nil)]
          (should= nil (my-fn nil)))))
    (it "accepts 1"
      (with-redefs [called-fn (constantly nil)]
          (should= 100 (my-fn 1))))))

(I realize this is a somewhat contrived example and arguably those assertions could all go under one characteristic, but let's suppose for now that I have good reason to write the code like this.)

I want, however, to just have to stub called-fn once, but moving this up out of the its raises errors because the real called-fn gets called instead of my redef.

Is there a way to reuse redefs (or use Speclj stubs) in Speclj so I'm not stuck pushing them all down inside the characteristics?

by G Gordon Worley III at April 17, 2014 09:35 PM


I need a textbook for machine learning with programming approach [on hold]

I'm taking an online course of machine learning, and I need a good textbook in machine learning better to be with MATLAB applications.

by Tito Tito at April 17, 2014 09:35 PM

Does this game terminate?

Consider the following card game (known in Italy as "Cavacamicia," which may be translated as "stripshirt"):

Two players randomly split in two decks a standard deck of cards. Each player gets one deck.

The players alternate placing down in a stack the next card from their deck.

If a player (A) places down a special card, i.e. a I, II, or III, the other player (B) has to place down consecutively the corresponding number of cards.

  • If in doing so B places down a special card, the action reverses, and so on; otherwise, if B places down the corresponding number of cards but no special card, A collects all the cards that were put down and adds them to their deck. A then restarts the game by placing down a card.

The first player to run out of cards loses the game.

Note: The outcome of the game depends exclusively on the initial partition of the deck. (Which may make this game look a bit pointless ;-)

Question: Does this game always terminate? What if we generalize this game and give any two sequences of cards to each player?
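The rules can be sketched as a small simulation (my own encoding, with assumptions: 0 is an ordinary card and 1 to 3 are the special cards I, II, III), which at least lets one search initial partitions for long or non-terminating games:

```python
from collections import deque

def play(deck_a, deck_b, max_steps=100000):
    """Return 0 if A loses, 1 if B loses, None if no one loses in max_steps plays."""
    hands = [deque(deck_a), deque(deck_b)]
    pile = []
    turn = 0   # whose turn to place a card
    owed = 0   # cards the current player still owes after a special card
    steps = 0
    while steps < max_steps:
        if not hands[turn]:
            return turn              # cannot place a card: this player loses
        card = hands[turn].popleft()
        pile.append(card)
        steps += 1
        if card > 0:                 # special card: the opponent owes `card` cards
            owed = card
            turn = 1 - turn
        elif owed > 1:               # ordinary card while paying: keep paying
            owed -= 1
        elif owed == 1:              # debt paid with no special card:
            turn = 1 - turn          # the other player collects the pile
            hands[turn].extend(pile) # and restarts by placing the next card
            pile.clear()
            owed = 0
        else:                        # ordinary alternation
            turn = 1 - turn
    return None
```

For example, `play([1], [0])` ends with B losing: A's special card makes B pay one ordinary card, A collects the pile, replays the special card, and B has nothing left.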

by Emanuele Viola at April 17, 2014 09:34 PM


I need a textbook for machine learning with programming approach [on hold]

I'm taking an online course of machine learning, and I need a good textbook in machine learning better to be with MATLAB applications.

by Tito Tito at April 17, 2014 09:34 PM


An edit distance that counts the number of mutations / deletions / insertions at distinct non-contiguous sites

I'm looking for the name, if it exists, of an edit distance-like metric that counts the number of: (1) mutations, or (2) deletions, or (3) insertions of one or more characters at distinct sites (alt. mutations/insertions/deletions at sites that are not contiguous) required to transform one string into another. Just to give it a name, call this edit distance metric DISTINCTLEVENSHTEIN for now.

So, for example:

String 1 = 00111110011100

String 2 = 000000

Here, DISTINCTLEVENSHTEIN would give an edit distance of "2" since "11111" and "111" are contiguous chunks that count as deletions at a single site in string 1 (alt. insertions in string 2).

Here's another example:

String 1 = 00111110011100

String 2 = 002200

Here, DISTINCTLEVENSHTEIN would give an edit distance of "3" since the two "2" characters that need to be mutated back to "0" characters (or vice versa in the other string) are contiguous. However, if we set string 2 to 20200, DISTINCTLEVENSHTEIN and the regular Levenshtein edit distance metric would be yield the output "4".

Is there any way to create DISTINCTLEVENSHTEIN mostly out of already implemented operations for calculating the Levenshtein / Hamming / etc. distances between two strings?

by AintGotNoDipole at April 17, 2014 09:26 PM



Computer Science degree

What are some career options for those with degrees in CS? What are some average salaries for those careers? Thanks!

submitted by ryanando
[link] [4 comments]

April 17, 2014 09:22 PM



What does consistency mean for "computational theories" corresponding to inductive types?

I am currently reading the book by Luo on computation and reasoning. In the book he contrasts inductive types considered as computational theories with axiomatic theories widespread in "standard" mathematics.

However, if we have axiomatic theories, then they can be inconsistent. This seems to be impossible with inductive types. But why is this so?

P.S. As far as I understand from an answer to my previous question "Correctness" of type theory we can create an inductively defined set and consistency of the type theory + inductive type will be equivalent to the consistency of set theory. Am I right?

by Konstantin Solomatov at April 17, 2014 09:00 PM


"Removing" a node from a functional linked list

I'm looking for a function that returns a linked list that doesn't contain a specific node.

Here is an example implementation:

Nil = None                  # empty node

def cons(head, tail=Nil):
    """ Extends list by inserting new value. """
    return (head, tail)

def head(xs):
    """ Returns the frst element of a list. """
    return xs[0]

def tail(xs):
    """ Returns a list containing all elements except the first. """
    return xs[1]

def is_empty(xs):
    """ Returns True if the list contains zero elements """
    return xs is Nil

def length(xs):
    """ Returns the number of elements in a given list. To find the length of a list we need to scan all of its
    elements, thus leading to a time complexity of O(n). """
    if is_empty(xs):
        return 0
    return 1 + length(tail(xs))

def concat(xs, ys):
    """ Concatenates two lists. O(n) """
    if is_empty(xs):
        return ys
    return cons(head(xs), concat(tail(xs), ys))

How can a remove_item function be implemented?
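One possible sketch, reusing the primitives above (assumptions: remove only the first matching node, and share the remaining tail, as persistent lists do):

```python
# Minimal re-statement of the question's primitives so this runs standalone.
Nil = None

def cons(head, tail=Nil): return (head, tail)
def head(xs): return xs[0]
def tail(xs): return xs[1]
def is_empty(xs): return xs is Nil

def remove_item(xs, value):
    """ Returns a list without the first node equal to value. O(n):
    the prefix before the match is copied, the tail after it is shared. """
    if is_empty(xs):
        return Nil
    if head(xs) == value:
        return tail(xs)
    return cons(head(xs), remove_item(tail(xs), value))

xs = cons(1, cons(2, cons(3)))
assert remove_item(xs, 2) == (1, (3, None))
```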

by JCPedroza at April 17, 2014 08:48 PM

Scala-Java Interop: hello world that calls java from scala

How can I import a class from a simple Java program in my Scala script?

I have:


the scala:

import hello.Hello

new Hello().hi("Scala")

the java:

package hello;

public class Hello {
    public void hi(String caller) {
        System.out.println(caller + " calling Java");
    }

    public static void main(String[] args) {
        (new Hello()).hi("Java");
    }
}

I have blindly tried a few different permutations (package or no package, same directory, etc.) and I do have my Java class, but I keep getting "not found" on the import.

by sam boosalis at April 17, 2014 08:35 PM

Is there any way to use Astyanax DefaultEntityManager from Scala with fields that are Option[T]?

Up until now, I haven't had much need for this because I've been saving things manually, but some recent work that popped up involves tons of different types, and each type has plenty of Option[T] fields (this is all Scala). Looking through the source code of, I see the following:

    Field[] declaredFields = clazz.getDeclaredFields();
    columnList = Maps.newHashMapWithExpectedSize(declaredFields.length);
    Set<String> usedColumnNames = Sets.newHashSet();
    Field tmpIdField = null;
    for (Field field : declaredFields) {
        Id idAnnotation = field.getAnnotation(Id.class);
        if (idAnnotation != null) {
            Preconditions.checkArgument(tmpIdField == null, "there are multiple fields with @Id annotation");
            tmpIdField = field;
        }
        Column columnAnnotation = field.getAnnotation(Column.class);
        if (columnAnnotation != null) {
            ColumnMapper columnMapper = null;
            Entity compositeAnnotation = field.getType().getAnnotation(Entity.class);
            if (Map.class.isAssignableFrom(field.getType())) {
                columnMapper = new MapColumnMapper(field);
            } else if (Set.class.isAssignableFrom(field.getType())) {
                columnMapper = new SetColumnMapper(field);
            } else if (compositeAnnotation == null) {
                columnMapper = new LeafColumnMapper(field);
            } else {
                columnMapper = new CompositeColumnMapper(field);
            }
            Preconditions.checkArgument(usedColumnNames.add(columnMapper.getColumnName().toLowerCase()),
                    String.format("duplicate case-insensitive column name: %s", columnMapper.getColumnName().toLowerCase()));
            columnList.put(columnMapper.getColumnName(), columnMapper);
        }
    }

Clearly this just pulls the annotations from each field. Has anyone out there using Astyanax + Scala made a patch for this that is more flexible and would work with Option[T]?

by Adrian Rodriguez at April 17, 2014 08:17 PM


Is it NP-hard to fill up bins with minimum moves?

There are $n$ bins and $m$ types of balls. The $i$th bin has labels $a_{i,j}$ for $1\leq j\leq m$; $a_{i,j}$ is the expected number of balls of type $j$.

You start with $b_j$ balls of type $j$. Each ball of type $j$ has weight $w_j$, and you want to put the balls into the bins such that bin $i$ has weight $c_i$. A distribution of balls such that the previous condition holds is called a feasible solution.

Consider a feasible solution with $x_{i,j}$ balls of type $j$ in bin $i$, then the cost is $\sum_{i=1}^n \sum_{j=1}^m |a_{i,j}-x_{i,j}|$. We want to find a minimum cost feasible solution.

This problem is clearly NP-hard if there is no restriction on $\{w_j\}$. The subset sum problem reduces to the existence of a feasible solution.

However, if we add the condition that $w_j$ divides $w_{j+1}$ for every $j$, then the subset sum reduction no longer works, so it's not clear whether the resulting problem remains NP-hard. Checking for the existence of a feasible solution takes only $O(nm)$ time (attached at the end of the question), but this does not give us the minimum-cost feasible solution.

The problem has an equivalent integer program formulation: Given $a_{i,j},c_i,b_j,w_j$ for $1\leq i\leq n,1\leq j\leq m$. \begin{align*} \text{Minimize:} & \sum_{i=1}^n \sum_{j=1}^m |a_{i,j}-x_{i,j}| \\ \text{subject to:} & \sum_{j=1}^m x_{i,j}w_j = c_i \text{ for all } 1\leq i\leq n\\ & \sum_{i=1}^n x_{i,j} \leq b_j \text{ for all } 1\leq j \leq m\\ & x_{i,j}\geq 0 \text{ for all } 1 \leq i\leq n, 1\leq j \leq m\\ \end{align*}

My question is,

Is the above integer program NP-hard when $w_j$ divides $w_{j+1}$ for all $j$?

It's not obvious how to solve this even when $n=1$ and $w_j=2^j$, namely \begin{align*} \text{Minimize:} & \sum_{j=1}^m |a_j-x_j| \\ \text{subject to:} & \sum_{j=1}^m 2^j x_j = c\\ & 0 \leq x_j \leq b_j \text{ for all } 1\leq j \leq m\\ \end{align*}

An algorithm to decide if there is a feasible solution in $O(nm)$ time:

Define $w_{m+1}=w_m(\max_{j} c_j + 1)$ and $d_j = w_{j+1}/w_j$. Let $a\%b$ be the remainder of $a$ divided by $b$.

  1. If there exists a $c_i$ that's not divisible by $w_1$, return "no feasible solution". (the invariant that $w_j$ divides every $c_i$ will always be maintained in the following loop)
  2. for $j$ from $1$ to $m$:

    1. $k \gets \sum_{i=1}^n (c_i/w_j)\%d_j$. (the minimum number of balls of weight $w_j$ required)
    2. If $b_j<k$, return "no feasible solution".
    3. $c_i \gets c_i - w_j((c_i/w_j)\% d_j)$ for all $i$. (remove the minimum number of required balls of weight $w_j$)
    4. $b_{j+1} \gets b_{j+1} + \lfloor (b_j-k)/d_j \rfloor$. (group the leftover smaller balls into larger balls)
  3. return "there is a feasible solution".
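For reference, the feasibility check above transcribes almost line by line into Python. This is a sketch following steps 1-4; note that the leftover smaller balls are added to the existing stock of larger balls:

```python
def feasible(c, b, w):
    """O(n*m) feasibility check: can bins with target weights c be filled
    exactly, using b[j] balls of weight w[j], where w[j] divides w[j+1]?"""
    c, b, w = list(c), list(b), list(w)
    m = len(w)
    w.append(w[-1] * (max(c) + 1))  # sentinel weight w_{m+1}, larger than any target
    if any(ci % w[0] for ci in c):
        return False  # invariant below: w[j] divides every remaining c[i]
    b.append(0)  # room for balls grouped past the last real weight
    for j in range(m):
        d = w[j + 1] // w[j]
        # minimum number of weight-w[j] balls forced by the residues mod d
        need = sum((ci // w[j]) % d for ci in c)
        if b[j] < need:
            return False
        # remove the forced balls from each bin's remaining target
        c = [ci - w[j] * ((ci // w[j]) % d) for ci in c]
        # group leftover small balls into balls of the next weight
        b[j + 1] += (b[j] - need) // d
    return True
```

After the last iteration every remaining target is divisible by the sentinel weight yet smaller than it, hence zero, so reaching the end of the loop means a feasible solution exists.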

by Chao Xu at April 17, 2014 08:15 PM


Play2 : compilation error with implicit value for parameter flash

I am a beginner in Play 2 and Scala, and I created a very simple login form. All works perfectly until I use flash to store the login result. All I obtain is the error:

/home/denis/dev/workspace/atabank/app/views/login.scala.html:5: could not find implicit value for parameter flash: play.api.mvc.Flash

According to the Play documentation, I put implicit request => ... in my controllers, and I use Ok(...).flashing("error" -> "wrong password"), but the error is still there...

Here is my code:

Login controller :

package controllers

import play.api.data.Form
import play.api.data.Forms.{ mapping, nonEmptyText }
import play.api.mvc.{ Action, Controller, Flash }

object Login extends Controller {

  def login = Action { implicit request =>
    Ok(views.html.login())
  }

  def authenticate() = Action { implicit request =>
    val form = loginForm.bindFromRequest()
    form.fold(
      hasErrors = { form =>
        Redirect(routes.Login.authenticate()).flashing("error" -> "Connection error")
      },
      success = { userInfo =>
        if ("123456".equals(userInfo.password))
          Redirect(routes.Home.show(userInfo.login)).flashing("success" -> "Successfully connected")
        else
          Redirect(routes.Login.login()).flashing("error" -> "Wrong password")
      })
  }

  val loginForm: Form[LoginInfo] = Form {
    mapping(
      "login" -> nonEmptyText,
      "password" -> nonEmptyText)(LoginInfo.apply)(LoginInfo.unapply)
  }
}

case class LoginInfo(login: String, password: String)

login.scala.html :

@()(implicit flash: Flash, lang: Lang)
@import helper._
@import controllers.Login._

@main("Login") {

    <div class="container">

        @helper.form(action = routes.Login.authenticate(), 'role -> "form", 'class -> "form-signin") {
            <h2 class="form-signin-heading">Please sign in</h2>
            <input type="email" name="login" class="form-control" placeholder="Email address" required autofocus>
            <input type="password" name="password" class="form-control" placeholder="Password" required>
            <button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button>
        }
    </div>
}


main.scala.html :

@(title: String)(content: Html)(implicit flash: Flash, lang: Lang)

<!DOCTYPE html>
<html>
    <head>
        <!-- CSS, JS -->
    </head>
    <body>
        <div class="container">
            @if(flash.get("success").isDefined) {
                <div class="alert alert-success">@flash.get("success")</div>
            }
            @if(flash.get("error").isDefined) {
                <div class="alert alert-error">@flash.get("error")</div>
            }
            @content
        </div>
    </body>
</html>


home controller :

package controllers

import play.api._
import play.api.mvc.{ Action, Controller }
import play.api.mvc.Flash

object Home extends Controller {

    def show(username: String) = Action { implicit request =>
        Ok(views.html.home(username))
    }
}


home.scala.html :

@(username: String)(implicit flash: Flash, lang: Lang)

@main("Home page") {
    <h1>Welcome Mister @username</h1>
}

Thanks for your help.

by Atatorus at April 17, 2014 07:51 PM

How to redefine a key in SBT?

How can you redefine a key in SBT (as opposed to extend or define)?

I currently have the following in my build script (project/build.scala):

fullClasspath in Runtime <<= (fullClasspath in Runtime, classDirectory in Compile) map { (cp, classes) => (cp.files map {
  f: File =>
    if (f.getName == classes.getName) {
      val result = new File(f.getParent + File.separator + "transformed-" + f.getName)
      if (result.exists) result else f
    } else f
}).classpath }

It extends the classpath in Runtime by adding, for each directory in Compile, a new directory with the same name but with transformed- prepended to the front.

(If you are wondering why, I have a plugin which performs program transformation on the bytecode after compilation but before packaging, and selective recompilation gets very confused if you overwrite the original files.)

My problem is the following: This extends the original key, and therefore the classpath contains the original directories from Compile, plus the renamed copies, but I only want the renamed ones from Compile.

I tried to do something along the lines of

fullClasspath in Runtime := ...

but I don't know what to put on the right-hand side.

I've marked the answer since it led me directly to the solution, but my final solution was to modify the above code snippet to the following,

fullClasspath in Runtime := ((fullClasspath in Runtime).value.files map {
  f: File =>
    if (f.getName == (classDirectory in Compile).value.getName) {
      val result = new File(f.getParent + File.separator + "transformed-" + f.getName)
      if (result.exists) result else f
    } else f
}).classpath
which does exactly what I wanted, and is slightly better style.

by Andrew Bate at April 17, 2014 07:51 PM

Check leap years with Clojure

I'm stuck using the and operator: how can you test for multiple conditions? I am very close, but I'm stuck on solving this in Clojure.

(defn leap [year] (cond (and (zero? (rem year 4)) (zero? (rem year 100))) true :else false))

Thank you for your help.
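For reference, the full Gregorian rule combines three divisibility tests: divisible by 4, and either not by 100 or also by 400. A Python sketch of the condition (the same `rem`-style tests translate directly to Clojure's `zero?` and `rem`):

```python
def leap(year):
    """Gregorian leap year: divisible by 4, except century years,
    which must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Note that the attempt above returns true only for years divisible by both 4 and 100, which is the opposite of the century-year exception.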

by Zac at April 17, 2014 07:39 PM


The Birth & Death of JavaScript

This starts silly, but about halfway in it starts talking about low-level concerns and connecting them in insightful ways to the previous setup.


by pushcx at April 17, 2014 07:23 PM


Differences in whether realization of a lazy sequence inside of a lazy sequence occurs

I wondered: What happens when you embed an expression that forces realization of a lazy sequence inside of an outer lazy sequence that's not realized?

Answer: It seems to depend on how you create the outer lazy sequence. If the outer sequence comes from map, the inner sequence is realized, and if the outer sequence comes from iterate, it's not.

Well, I'm pretty sure that that is not the right way to describe what happens below--I'm pretty sure that I'm not understanding something. Can someone explain?

(There is one quirk, which is that while map returns a LazySeq, iterate returns a Cons wrapped around a LazySeq. So in the tests for class and realization below, I look at the rest of the output of iterate. I don't believe that this difference between map and iterate has anything to do with my question.)

(def three-vec (range 1 4))

(defn print-times-10-ret [x]
  (let [y (* 10 x)]
    (println "[" y "] ")
    x))

(defn once [xs] (map print-times-10-ret xs))

(defn doall-once [xs] (doall (map print-times-10-ret xs)))

(defn doa-twice [xs] (once (doall-once xs))) ; "doa" since only half doall-ed

;; Here the inner sequence seems to get realized:
(def doa-twice-map (doa-twice three-vec))
; printed output:
; [ 10 ]
; [ 20 ]
; [ 30 ]

;; Here we create a lazy sequence that will call doall-once when
;; realized, but nothing gets realized:
(def doall-once-iter (iterate doall-once three-vec))
; no printed output

(class doa-twice-map)
; => clojure.lang.LazySeq

;; Note that this is not realized, even though the inner seq was realized (?):
(realized? doa-twice-map)
; => false

(class (rest doall-once-iter))
; => clojure.lang.LazySeq

(realized? (rest doall-once-iter))
; => false

by Mars at April 17, 2014 07:10 PM

Play! upgrade error: ')' expected but '}' found

Just upgraded our project to Play! 2.2.1. After fixing dependencies, checking code, and fixing all initial compiler errors I went to launch the project and got the following:

')' expected but '}' found.

It occurs in 3 "view" files pointing to the last line of each file. Those 3 views are structured exactly the same as about 30 other files.

I've tried changing line delimiters, adding spacing to the top, etc. I also ran an sbt clean as well as a play clean. No idea. Here's a sample of an offending file:

Update: I removed the var userId = "%s"; and the format below, and that file compiled. Any ideas how that's related?

@(session:play.api.mvc.Session, socialUser: Option[PlatformUser], userProfile: Option[UserProfile], flash:play.api.mvc.Flash)(implicit request: RequestHeader)

@import platform3.util.time.TimeFormats._

@main(session, models.Community.communitySettings(request), "Thanks for signing up!", socialUser, flash, """

  /* injected javascript */
  var userId = "%s";

  $.getJSON("/balance", function(data) {

""" format socialUser.getOrElse("")) {

  <section id="page-title">
    <div class="container">
      <div class="row">
        <div class="span12 text-center">

  <section class="featured">
    <div class="featured-text hero-img thanks-img">
      <div class="container">
        <h2><span class="featured-text-bg">You're signed up!</span></h2>
      @socialUser match {
        case Some(u) => {
        <h3><span class="featured-text-bg">Hey @u.firstName, we're excited you</span><br>
        <span class="featured-text-bg">could join us. Click below to</span><br>
        <span class="featured-text-bg">see how you can get started.</span></h3>
        }
        case None => {}
      }
        <div class="sc-button-group">
          <a href="/earn" class="btn btn-large btn-rounded btn-theme"><i class="icon-star icon-large"></i> Earn Points</a>
          <a href="/account" class="btn btn-large btn-rounded">My Account</a>

  <section id="content2">
    <div class="container">
      <div class="row margin-30">
        <div class="span9 margin-30">
          <h2 style="color:#626c72" class="margin-20">Your current point balance is <strong><span id="title-points"></span>pts</strong>.</h2>
          <p class="lead">Earning more points is easy! Get involved and earn more rewards quickly.</p>
          <p><b>Visit us</b> in person and sign in to our on-site kiosks.<br>
            <b>Share our content</b> and your experiences across the web and earn more points.</p>
          <p><a href="/earn" class="btn btn-large btn-rounded btn-theme"><i class="icon-star icon-large"></i> Earn Points</a>
            <a href="/account" class="btn btn-large btn-rounded">View My Account</a></p>
          <p><small>Maximum of 2,500 points can be earned per month per person. Points must be redeemed before midnight of December 31st of each year.</small></p>
        <div class="span3 margin-30">
          <p>Earn even more points by connecting your social accounts.</p>
          <p><a href="/connect/facebook"><img src="/assets/img/connect-facebook.jpg" /></a></p>
          <p><a href="/connect/twitter"><img src="/assets/img/connect-twitter.jpg" /></a></p>
          <p><a href="/connect/instagram"><img src="/assets/img/connect-instagram.jpg" /></a></p>


Really appreciate your help on this.

by crockpotveggies at April 17, 2014 06:58 PM


Planet Theory

TR14-055 | Communication Complexity of Set-Disjointness for All Probabilities | Mika Göös, Thomas Watson

We study set-disjointness in a generalized model of randomized two-party communication where the probability of acceptance must be at least alpha(n) on yes-inputs and at most beta(n) on no-inputs, for some functions alpha(n)>beta(n). Our main result is a complete characterization of the private-coin communication complexity of set-disjointness for all functions alpha and beta, and a near-complete characterization for public-coin protocols. In particular, we obtain a simple proof of a theorem of Braverman and Moitra (STOC 2013), who studied the case where alpha=1/2+epsilon(n) and beta=1/2-epsilon(n). The following contributions play a crucial role in our characterization and are interesting in their own right. (1) We introduce two communication analogues of the classical complexity class that captures small bounded-error computations: we define a "restricted" class SBP (which lies between MA and AM) and an "unrestricted" class USBP. The distinction between them is analogous to the distinction between the well-known communication classes PP and UPP. (2) We show that the SBP communication complexity is precisely captured by the classical corruption lower bound method. This sharpens a theorem of Klauck (CCC 2003). (3) We use information complexity arguments to prove a linear lower bound on the USBP complexity of set-disjointness.

April 17, 2014 06:49 PM


Get extended properties from video file using PHP

I have a test.asf file that I'd like to get the 'title' property from. There are also some other properties in there that would be nice to access, like 'comments' and 'length'.

Currently I'm getting those properties by checking the file in Windows; my script is running on a FreeBSD server.

Is this possible using just PHP? Does anyone have experience with this, possibly using an external tool/script that can be called from PHP?

I already tried:


But that just returned an error. Also, from the docs it doesn't seem like it's what I need.

On a final note, if there's no tool readily available, perhaps it's possible to convert the binary data and look up the properties myself, using some kind of low-level code?

EDIT: I tried the GetId3 lib, but that will only return me the mime-type and a warning:

"ASF header GUID {75B22630-668E-11CF-A6D9-00AA0062CE6C} does not match expected "GETID3_ASF_Header_Object" GUID {00000000-0000-0000-0000-000000000000}"

by Oli at April 17, 2014 06:49 PM



Equivalent of Python's Pass in Scala

If there is a function that you don't want to do anything with, you simply do something like this in Python:

def f():
    pass

My question is, is there something similar to pass in Scala?

by Games Brainiac at April 17, 2014 06:43 PM

spray-json providing JsonFormats for case class

I have simple json as:

{
    "host" : "localhost",
    "port" : 3000
}

I have read and parsed it with:

val config_content = Source.fromFile(config).mkString

Now I want to map it onto my Config class. I have:

package utils

import spray.json._
import DefaultJsonProtocol._

case class Config(host : String, port : Int)

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val configFormat = jsonFormat2(Config)
}

But when I try to compile it, I get the following error:

config_content.parseJson Cannot find JsonReader or JsonFormat type class for utils.Config

How to do it correctly?

by 0xAX at April 17, 2014 06:42 PM



JsonProperty annotation does not work for Json parsing in Scala (Jackson/Jerkson)

I need to parse the following json string:

{"type": 1}

The case class I am using looks like:

case class MyJsonObj(
    val type: Int
)

However, this confuses Scala since 'type' is a keyword. So, I tried using the @JsonProperty annotation from Jackson/Jerkson as follows:

case class MyJsonObj(
    @JsonProperty("type") val myType: Int
)

However, the Json parser still refuses to look for 'type' string in json instead of 'myType'. Following sample code illustrates the problem:

import com.codahale.jerkson.Json._
import org.codehaus.jackson.annotate._

case class MyJsonObj(
    @JsonProperty("type") val myType: Int
)

object SimpleExample {
  def main(args: Array[String]) {
    val jsonLine = """{"type":1}"""
    val jsonObj = parse[MyJsonObj](jsonLine)
  }
}

I get the following error:

[error] (run-main-a) com.codahale.jerkson.ParsingException: Invalid JSON. Needed [myType], but found [type].

P.S: As seen above, I am using jerkson/jackson, but wouldn't mind switching to some other json parsing library if that makes life easier.
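The underlying keyword-collision-plus-rename idea, illustrated in plain Python (the names `MyJsonObj`/`my_type` just mirror the Scala above; this sidesteps the annotation machinery entirely):

```python
import json

class MyJsonObj:
    """Plain-Python illustration of renaming a reserved-looking JSON key."""
    def __init__(self, my_type):
        self.my_type = my_type

    @classmethod
    def from_json(cls, s):
        d = json.loads(s)
        # the JSON key stays "type"; only the attribute name is renamed
        return cls(my_type=d["type"])

obj = MyJsonObj.from_json('{"type": 1}')
assert obj.my_type == 1
```

The annotation in the Scala version is supposed to perform exactly this key-to-field mapping at deserialization time.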

by gjain at April 17, 2014 06:35 PM


Twitter #DataGrants selections

In February, we introduced the Twitter #DataGrants pilot program, with the goal of giving a handful of research institutions access to Twitter’s public and historical data. We are thrilled with the response from the research community — we received more than 1,300 proposals from more than 60 different countries, with more than half of the proposals coming from outside the U.S.

After reviewing all of the proposals, we’ve selected six institutions, spanning four continents, to receive free datasets in order to move forward with their research.

Thank you to everyone who took part in this pilot. As we welcome Gnip to Twitter, we look forward to expanding the Twitter #DataGrants program and helping even more institutions and academics access Twitter data in the future. Finally, we’d also like to thank Mark Gillis, Chris Aniszczyk and Jeff Sarnat for their passion in helping create this program.

April 17, 2014 06:34 PM





m2k14: Hackathon Begins

As is their wont, a number of developers have congregated for another hackathon, this time in sunny Morocco.

You can, of course, follow the commits on source-changes, but the war cries that lead us down the road to Valhalla are being collected for your inspiration and amusement at OpenSSL Valhalla Rampage.

As always, it is your donations that make it possible for our berserkers to greet the Valkyries!

April 17, 2014 05:52 PM

Planet Theory

STOC 2014 announcement.

Howard Karloff writes in to remind everyone that the STOC 2014 early registration deadline is coming up soon (Apr 30 !). Please make sure to register early and often (ok maybe not the last part). There will be tutorials ! workshops ! posters ! papers ! and an off-off-Broadway production of Let It Go, a tragicomic musical about Dick Lipton's doomed effort to stop working on proving P = NP.

At least a constant fraction of the above statements are true.

And if you are still unconvinced, here's a picture of Columbia University, where the workshops and tutorials will take place:

by Suresh Venkatasubramanian at April 17, 2014 05:37 PM



Phonebloks is a vision for a phone worth keeping. We want a modular phone that can reduce waste, is built on an open platform and made for the entire world. We are keen on finding the right partners and people to build this phone. We set up an online platform where you can share your thoughts, ideas and feedback. We believe that together we can make the best phone in the world.


by webjay at April 17, 2014 05:36 PM



How do you block a thread until a condition becomes true?

In Clojure, how do you block a thread (future) until a condition becomes true? Or, alternatively, perhaps keep retrying until a condition becomes true? This is easy when you have condition variables, but I'm not sure what's the Clojure way to do this.

To be more specific, I have a shared variable that is accessible by many futures at the same time. A future should do the following:

  1. Check the state of the variable.
  2. If the state meets a certain condition, update it to a new state.
  3. If the state does not meet the condition, the future should block or retry, until the condition is met (by another thread modifying the state).
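The classic condition-variable shape of steps 1-3, sketched here in Python for reference (Clojure on the JVM can express the same pattern with `locking` plus `.wait`/`.notifyAll` on a monitor object, or by polling an atom; the names below are made up for the example):

```python
import threading

state = {"value": 0}          # shared state accessed by several threads
cond = threading.Condition()  # guards state and lets threads sleep on it

def update_when(predicate, new_value):
    """Steps 1-3: block until predicate(state) holds, then update the state."""
    with cond:
        # wait_for releases the lock while sleeping and re-checks the
        # predicate each time another thread calls notify_all
        cond.wait_for(lambda: predicate(state))
        state["value"] = new_value
        cond.notify_all()  # wake other waiters: the state just changed

def set_value(v):
    """Every state change must notify waiters, or they sleep forever."""
    with cond:
        state["value"] = v
        cond.notify_all()
```

The key point is that the check-then-update in `update_when` happens atomically under the lock, so no other thread can invalidate the condition between steps 1 and 2.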

by Derek Chiang at April 17, 2014 05:27 PM


Theoretical concepts with maximum practical implication

What made me ask this question is something I learned in graph theory. I was reading about centrality and found that betweenness centrality is very important in social networks. Do there exist any other concepts with a similar impact?

You could close this question by stating that

  1. This is not a theoretical question
  2. All concepts have practical implications

I am looking for concepts like the above which are simple but have very important practical implications.

by user3162 at April 17, 2014 05:24 PM


Specifying Scalac Compile-Time Option with maven-scala-plugin

Using the maven-scala-plugin 2.15.2, I'm trying to limit the maximum length of Scala class file names to 50 characters. I tried 2 places in my pom.xml:

                                <scalacArgs>                       <!-- Attempt #1 -->
                                    <scalacArg>-Xmax-classfile-name 50</scalacArg>
                                </scalacArgs>
<!-- Attempt #2 -->

I also ran mvn compile -Xmax-classfile-name 50, but Maven did not recognize the option and failed.

Where can I specify this option using the maven-scala-plugin?

by Kevin Meredith at April 17, 2014 05:19 PM



Help! I'm not sure how to read external files and randomize what comes from it in C++!

In computer science, we are working on a final project for the end of the year. I am writing a program that generates the statistics of each player in the NBA for the 2013-2014 regular season, along with quotes from NBA players. Right now I have two problems. The first is that I'm not sure how to read in quotes from an external file and pick one at random. The second is that I need to read in the stats of a certain player (which the user prompts for) from an external file, and I don't know how to get the stats for one person out of a file containing every player's stats. I'm not really the best at coding, so if you could explain with simplicity, that would be great! Thank you so much! :)

P.S. Also, we are coding in C++ using Code Warrior and if you need to see the code I have so far, just ask!
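The general pattern for the first problem is the same in any language: read the file into a list of lines, then pick a random index. A sketch in Python (the file name is made up; in C++ the analogous pieces are `std::ifstream`, `std::getline` into a `std::vector<std::string>`, and indexing with `rand() % quotes.size()`):

```python
import random

def load_quotes(path):
    """Read one quote per line from a text file, skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def random_quote(quotes):
    """Pick one quote uniformly at random."""
    return random.choice(quotes)

# usage (assuming a file quotes.txt with one quote per line):
# print(random_quote(load_quotes("quotes.txt")))
```

The second problem follows the same shape: read each line, split it into fields, and keep the line whose name field matches the requested player.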

submitted by 547008

April 17, 2014 05:07 PM



How to flatten a collection with Spark/Scala?

In Scala I can flatten a collection using:

val array = Array(List("1,2,3").iterator,List("1,4,5").iterator)
                                                  //> array  : Array[Iterator[String]] = Array(non-empty iterator, non-empty itera
                                                  //| tor)

    array.toList.flatten                      //> res0: List[String] = List(1,2,3, 1,4,5)

But how can I do something similar in Spark?

Reading the API docs, there does not seem to be a method which provides this functionality?
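For comparison, the plain (non-Spark) flatten corresponds to this Python sketch; in Spark itself the usual tool for this shape is `flatMap` on the RDD (e.g. `rdd.flatMap(x => x)`), mentioned here as a pointer rather than a tested answer:

```python
from itertools import chain

# mirrors Array(List("1,2,3").iterator, List("1,4,5").iterator)
array = [iter(["1,2,3"]), iter(["1,4,5"])]

# chain.from_iterable concatenates the inner iterators, like .flatten
flattened = list(chain.from_iterable(array))
# flattened == ["1,2,3", "1,4,5"]
```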

by blue-sky at April 17, 2014 05:06 PM



Because at two in the morning a surveillance camera caught a ...

Because at two in the morning a surveillance camera caught a teenager urinating through the fence into a drinking water reservoir, Portland would now rather flush away the entire reservoir. That's more than 100 million liters of water.

April 17, 2014 05:01 PM

Has anyone happened to find one of the 2,300 test tubes of ...

Has anyone happened to find one of the 2,300 test tubes of SARS that were discovered missing during an inventory at the Pasteur Institute?

April 17, 2014 05:01 PM

Out of fear of "cyberterrorists", the BKA is now investigating ...

Out of fear of "cyberterrorists", the BKA is now investigating 3D printers. Personally, I'm more afraid of the BKA than of any nebulous alleged terrorists.

April 17, 2014 05:01 PM

Current highlight from OpenBSD's OpenSSL cleanup: Do ...

Current highlight from OpenBSD's OpenSSL cleanup:
Do not feed RSA private key information to the random subsystem as
entropy. It might be fed to a pluggable random subsystem....

What were they thinking?!

Er, yes, that question does come to mind. I imagine it went roughly like this:
Hey boss, we need some entropy for the random number generator over here!

What's that? Crypto keys have high entropy, you say?

OK boss, got it, will do!


April 17, 2014 05:01 PM

Cool project: "Open Source Seeds". These are plant seeds ...

Cool project: "Open Source Seeds". These are plant seeds with an open license that guarantees, in particular, free redistribution and that they do not fall under patents. And it even has a GPL-style viral clause!

April 17, 2014 05:01 PM



Direct reduction from Circuit SAT to NAE-3-SAT

I know how to reduce the $Circuit-SAT$ problem to $3-SAT$, and thereafter to reduce $3-SAT$ to $NAE-4-SAT$ and finally $NAE-3-SAT$. What I do is rewrite the circuit to consist only of NAND gates (which is pretty easy), and then write an equivalent Boolean function of a NAND gate: assume $a$ and $b$ are inputs and $c$ is the output. The Boolean function which expresses consistency of the values of $a, b, c$ is: $f(a,b,c) =(\lnot a\lor \lnot b\lor \lnot c) \land (a\lor c) \land (b\lor c)$
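The consistency function can be sanity-checked by exhaustive enumeration; a small Python sketch:

```python
from itertools import product

def nand_consistent(a, b, c):
    """CNF gadget for c = NAND(a, b):
    (not a or not b or not c) and (a or c) and (b or c)."""
    return (not a or not b or not c) and (a or c) and (b or c)

# exhaustive check: the gadget is satisfied exactly when c = NAND(a, b)
for a, b, c in product([False, True], repeat=3):
    assert nand_consistent(a, b, c) == (c == (not (a and b)))
```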

If we replace each gate with its equivalent function and also add the output variable $o$ to the conjunction, then satisfying the whole Boolean formula is equivalent to satisfying the circuit problem. The only remaining issue is that some clauses have fewer than three literals, which can be resolved by adding false variables $z_1, z_2$ to them. We can simply guarantee the falseness of $z_1$ and $z_2$ by adding the conjunctions $(z_1 \lor x \lor y) \land (z_1 \lor \lnot x \lor y) \land (z_1 \lor x \lor \lnot y) \land (z_1 \lor \lnot x \lor \lnot y)$ and the same for $z_2$, in which $x, y$ are arbitrary variables.

What remains is to show that this 3-SAT problem can be reduced to NAE-3-SAT. The solution is to introduce another new variable $v$ for the whole formula and replace each clause $(x_i \lor x_j \lor x_k)$ with $(y_i, y_j, y_k, v)$, in which we define $y_i$ as true if and only if $x_i \neq v$. It is straightforward to show that the NAE condition for this tuple is equivalent to the 3-SAT condition for the $x_i$s. Finally, we replace $(y_i,y_j,y_k,v)$ by $(y_i, y_j, w) \land (y_k, v, \lnot w)$ for a new variable $w$.

But I am seeking a more direct reduction from $C-SAT$ to $NAE-3SAT$. Does anyone have an idea of how to further simplify this procedure?

by AmeerJ at April 17, 2014 04:59 PM


play command doesn't work when inside of an existing sbt project

So I have an existing sbt project setup:


Now when inside this folder, I want to create a play application called 'web'.

I get this error:

[error] Not a valid command: new (similar: set)
[error] Not a valid project ID: new
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: new (similar: name, run, runner)
[error] new
[error]    ^

The play command works just fine if I try the same thing in a new folder.

How can I get the play command to work inside of an existing sbt project?

I'm using sbt 0.13

by Blankman at April 17, 2014 04:43 PM

Reverse lookup nested attributes in scala

I have the following code that to me is very ugly. Is there a cleaner way to write this code?

  type Bar = String
  case class Foo(bars: List[Bar])

  def groupByBar(foos: Seq[Foo]) = (for {
    foo <- foos
    bar <- foo.bars
  } yield bar -> foo).
    groupBy {case (bar, foo) => bar}.
    map {case (bar, pairs) => bar -> (pairs map {_._2})}
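For comparison, the same inverted index written imperatively in Python with a `defaultdict` (dictionaries stand in for the `Foo` case class):

```python
from collections import defaultdict

def group_by_bar(foos):
    """Invert the foo -> bars relation into bar -> [foos containing it]."""
    index = defaultdict(list)
    for foo in foos:
        for bar in foo["bars"]:
            index[bar].append(foo)
    return dict(index)

foos = [{"name": "f1", "bars": ["a", "b"]},
        {"name": "f2", "bars": ["b"]}]
# group_by_bar(foos)["b"] lists both foos; ["a"] lists only the first
```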

by ekaqu at April 17, 2014 04:23 PM

Fluentd with Terms and Kibana

I am passing some JSON records through Fluentd into ElasticSearch, for searching using the Kibana front-end. The issue I have is that terms like "Web Application" are being split into two distinct ones, i.e. "Web" and "Application". I have read that a new mapping can be added, so I tried to do something like

curl -XPUT 'http://localhost:9200/_all/_mapping' -d '
{
  "category" : {
    "type" : "multi_field",
    "fields" : {
      "category" : {"type" : "string", "analyzer" : "standard"},
      "not_analyzed" : { "type" : "string", "index" : "not_analyzed" }
    }
  }
}'

so that the field does not get analyzed, but it did not work and just throws the error message {"error":"ActionRequestValidationException[Validation Failed: 1: mapping type is missing;]","status":500}.

I would be grateful for some help on how to do this. I am guessing that I will also need to re-index somehow afterwards?


by UxBoD at April 17, 2014 04:21 PM


Is there a general-case sweep line algorithm for line segment intersection?

I'm looking for a sweep line algorithm for finding all intersections in a set of line segments that doesn't necessarily respect the general position constraints of the Bentley–Ottmann algorithm (taken from Wikipedia):

  • No two line segment endpoints or crossings have the same x-coordinate
  • No line segment endpoint lies upon another line segment
  • No three line segments intersect at a single point.

Is there any sweep line solution to this problem? If not, is there any other algorithm that solves this problem in O((n+k)log(n))?
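For context, the brute-force O(n²) baseline that a sweep line improves on handles exactly the degenerate cases listed above via orientation tests; a Python sketch of the pairwise intersection predicate:

```python
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 ccw, <0 cw, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def on_segment(p, q, r):
    """Assuming p, q, r collinear: is r within the bounding box of pq?"""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s1, s2):
    """Exact test covering shared endpoints and endpoint-on-segment cases."""
    p1, q1 = s1
    p2, q2 = s2
    d1 = orient(p1, q1, p2)
    d2 = orient(p1, q1, q2)
    d3 = orient(p2, q2, p1)
    d4 = orient(p2, q2, q1)
    if d1 != d2 and d3 != d4:
        return True  # proper crossing
    # degenerate cases: an endpoint lying on the other segment
    if d1 == 0 and on_segment(p1, q1, p2): return True
    if d2 == 0 and on_segment(p1, q1, q2): return True
    if d3 == 0 and on_segment(p2, q2, p1): return True
    if d4 == 0 and on_segment(p2, q2, q1): return True
    return False
```

Running this over all pairs takes O(n²) regardless of the number of intersections, which is exactly what an output-sensitive O((n+k)log n) algorithm tries to beat.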

by Caio Oliveira at April 17, 2014 04:19 PM


Java 8 and Scala

This Java 8 vs Scala: a Feature Comparison article on InfoQ, very well summarizes the similarities between the upcoming Java 8 and Scala.

Given the improvements to Java 8, what are the features in Scala that would still motivate me to adopt Scala as a language?

Edit: I use Java 7

by Nerrve at April 17, 2014 04:19 PM



Does this.synchronized protect you from the garbage collector?

I have some code that uses WeakReference. I had to implement an ugly workaround to solve the problem, but I wonder if just adding a this.synchronized might solve my problem with the garbage collector. Here is the code; the problem is in the function create.

  /** The canonical map. */
  var unicityTable = new WeakHashMap[CanonicalType, LightWeightWrapper[CanonicalType]] with SynchronizedMap[CanonicalType, LightWeightWrapper[CanonicalType]]

  /** Create a new element from any object.
    * @param from the object that will be used to generate a new instance of your canonical object. */
  def create(from: FromType) = {
    val newElt = makeFrom(from)
    // I wonder if adding a this.synchronized here (and of course removing the test for garbage collection) might solve the problem more elegantly
    val wrapper = unicityTable.get(newElt)
    wrapper match {
      case Some(w) => w._wrap.get match { // the element is in the map
        case Some(element) => element // the element was not garbage collected while we were obtaining it
        case None => // somehow, the wrapped element was garbage collected before we got it, so we recreate it and put it back
          unicityTable.put(newElt, LightWeightWrapper(newElt))
          newElt
      }
      case None => // the element is not in the map
        unicityTable.put(newElt, LightWeightWrapper(newElt))
        newElt
    }
  }


  class LightWeightWrapper[T <: AnyRef] private (wrap: T) {

    val _wrap = new WeakReference(wrap)

    def wrapped = _wrap.get match {
      case Some(w) => w
      case None => // should never happen
        throw new IllegalStateException
    }

    override lazy val hashCode = wrapped.hashCode

    override def equals(o: Any): Boolean = o match {
      case LightWeightWrapper(w) => w eq this.wrapped
      case _ => false
    }
  }

So the question is: Does garbage collection stop during the execution of a synchronized block?

by Edmundo López Bóbeda at April 17, 2014 04:04 PM

Consequences of a Parent Child Relationship in akka

case object ChildMessage

implicit val as = ActorSystem()

class Child extends Actor {
  def receive = {
    case ChildMessage => println("I'm a child")
  }
}

class ParentWithExplicitChildren extends Actor {
  val children = Array.fill(5)(context.actorOf(Props[Child]))
  def receive = {
    case ChildMessage => children.foreach(_ ! ChildMessage)
    case _ => println("I'm a parent")
  }
}

class ParentWithActorRefs extends Actor {
  val shamChildren = Array.fill(5)(as.actorOf(Props[Child]))
  def receive = {
    case ChildMessage => shamChildren.foreach(_ ! ChildMessage)
    case _ => println("I'm a parent")
  }
}

val parent = as.actorOf(Props[ParentWithExplicitChildren])
parent ! ChildMessage
// Will shut down children
parent ! PoisonPill

val shamParent = as.actorOf(Props[ParentWithActorRefs])
shamParent ! ChildMessage
// WONT shut down children
shamParent ! PoisonPill

Using the example above I can only think of two consequences of not having an explicit Parent Child relationship.

  1. Poison Pill won't explicitly kill the actor refs contained in ParentWithActorRefs
  2. ParentWithActorRefs's context.children will be empty

Are there other consequences? Does the non-child message relaying potentially have different message-ordering semantics than the child message relaying? Can I not access ParentWithExplicitChildren's child actor refs with actorSelection?

by Andrew Cassidy at April 17, 2014 03:56 PM


What do you recommend for assembly function parameter passing?

I have a RISC processor with 7 general purpose registers available. The 7th one is my stack pointer. I was wondering: is there any reason why people use registers to pass parameters to functions? I always found the push/pop method much cleaner. Are there any actual speed benefits? (I know it's faster to access registers than memory, but it still seems like a lot of hassle.)

submitted by mislav111
[link] [5 comments]

April 17, 2014 03:53 PM


Eclipse Scala IDE: How to build standalone Scala app?

I am writing an app that uses AnormCypher (a Cypher-oriented Scala library for Neo4j Server). I write my code in the Eclipse Scala IDE. Using the sbteclipse plugin I have imported the AnormCypher sbt project into Eclipse. Next I added it to the Java build path as an external project. Everything compiles and works from Eclipse now.

Question: How, in Eclipse, do I build a standalone Scala program with all necessary dependencies, including an external Scala project imported in Eclipse?

Trying to create an 'executable jar' from Eclipse does not work in this case, because to do so Eclipse requests: "Select a 'Java Application' launch configuration to use to create a runnable JAR." Alas, Eclipse here has no idea about the Scala launch configuration.

by DarqMoth at April 17, 2014 03:48 PM



Planet Clojure

Clojure Procedural Dungeons

Foreword: When making games, there are two ways to make a dungeon. The common method is to design one in the CAD tool of our choice (or to draw one, in the case of 2D games). The alternative is to automatically generate random dungeons using a few very powerful algorithms. We could automatically…

by Noobtuts at April 17, 2014 03:29 PM


FreeBSD 9.2: how do I install libcurl with openssl?

On Ubuntu I use this command:

apt-get install libcurl4-openssl-dev

On FreeBSD, I've tried this, which doesn't work:

pkg_add -r libcurl4-openssl-dev

I've tried looking through the list of ports here, and didn't see anything obvious:

What's the equivalent install package for FreeBSD (9.2 specifically)?

by Alan at April 17, 2014 03:27 PM

Dave Winer

WordPress to OPML, working!

Fargo 1.54 is out.

It's the first version with the ability to download a WordPress site as a single Fargo outline.

I will release the source code for this project. It's written in JavaScript and runs in node.js. The format is OPML, but the server app could put out any format you like, with modifications of course. I want to do some testing before releasing it.

As a demo, here's the OPML source for the Rebooting the News site.

This release has no user interface largely because I'm not sure what kind of UI it should have. I want to see how it works for users first before nailing that down.


Since this feature might also be of interest to WordPress users (and developers) who are not using Fargo, feel free to ask questions in the comments below. I use WordPress too, so I'm interested in comments from other users.

Update -- source release

As promised, I've released the source for the server.

April 17, 2014 03:25 PM


Package contains object and package with same name

I am having problems compiling some Scala with Maven or Eclipse where I try to import a class from a Java jar which contains both a namespace and class of the same name.
I can compile with scalac, however.

E.g. the Java project (jar) contains:



-> foobar.jar

Scala project references foobar.jar



class foobartest {


The compiler complains with:

package foo contains object and package with same name: bar 
one of them needs to be removed from classpath

Using Maven 3.0.03/Eclipse 3.7.1 with Scala (and maven-scala-plugin).

The jar which I am having problems with is jenkins-core-1.399.jar - it definitely contains several instances where there is a namespace and object of the same name.
I am attempting to write a Jenkins plugin in Scala (I could do this in Java but would prefer scala since all of our libraries are in scala), which is dependent on using Maven -

by David at April 17, 2014 03:23 PM



How to check if element is null or empty in a field constructor?

Similar to hasErrors

<div class="form-group @if(elements.hasErrors) {error}">

    @** this does not work **@
    @if(!elements.label.isEmpty && elements.label != null) {
        <label for="">@elements.label</label>
    }

    @** this does not work **@
    @if(!elements.infos.isEmpty && elements.infos != null) {
      <p class="help-inline">@elements.infos.mkString(", ")</p>
    }

    @if(elements.hasErrors) {
        <p class="help-inline">@elements.errors.mkString(", ")</p>
    }
</div>

Even checking for an empty string would be fine. I can't figure out how to do either.

In the view:

    'placeholder -> "email or username",
    '_help -> "",
    '_label -> null,
    'class -> "form-control"

I am using Play Framework 2.2.x
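One thing worth noting about the template conditions above (a plain-Scala sketch of the same logic; the helper name `safeNonEmpty` is mine): `&&` evaluates left to right and short-circuits, so the null test has to come first, otherwise `.isEmpty` is invoked on a null reference and throws a NullPointerException before the null check is ever reached.

```scala
// Null test first: the right-hand side only runs when s is non-null.
def safeNonEmpty(s: String): Boolean = s != null && !s.isEmpty

val label: String = null
// !label.isEmpty && label != null   // reversed order: NullPointerException
safeNonEmpty(label)    // false, no NPE
safeNonEmpty("email")  // true
```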

by mawburn at April 17, 2014 03:14 PM



in scala what is the A in sum[B >: A](implicit num: Numeric[B]): B

I see this method in scala List.sum

sum[B >: A](implicit num: Numeric[B]): B

Now I understand that it expects the num argument to be provided implicitly as a Numeric[B], which means it belongs to the Numeric type class. However, what I don't understand is what this A is doing there, if the implementation block does not refer to it at all.

the return value is B and the implementation is


and num is also of type Numeric[B]. So if the return value does not refer to A and the implementation does not refer to A, why is it needed?
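As an illustration (mine, not from the question): the lower bound B >: A is what lets sum work even when A itself has no Numeric instance, because the caller may widen the element type to a supertype that does. The empty list is the classic case, where A = Nothing:

```scala
// A = Nothing has no Numeric instance, but B is inferred from the expected
// type: B = Int satisfies both B >: Nothing and Numeric[B].
val empty = List.empty[Nothing]
val zero: Int = empty.sum

// In the common case the bound is invisible: B = A = Int, using Numeric[Int].
val six = List(1, 2, 3).sum
```

Without the bound, sum would need a Numeric[A] exactly, and `empty.sum` could never typecheck.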

by Jas at April 17, 2014 03:08 PM


People with a degree in Computer Science, what do you do now?

I'm studying computer science, and I want to know what my options are after college. Please specify what degree(s) you have, your job now, and the tasks/ fields of computer science involved in it. Any information is helpful!

submitted by NameOfThyUser
[link] [333 comments]

April 17, 2014 03:05 PM


Clojure in Action, Ch 12 Data Analysis example, dependency issues

I am working through the first edition of this book and, while I enjoy it, some of the examples given seem outdated. I would give up and find another book to learn from, but I am really interested in what the author is talking about and want to make the examples work for myself, so I am trying to update them as I go along.

The following code is a map/reduce approach to analyzing text that depends on clojure.contrib. I have tried changing the .split function to re-seq with #"\w+", used line-seq instead of read-lines, and changed the .toLowerCase to string/lower-case. I tried to follow my problems to the source code and read the docs thoroughly, learning that the read-lines function closes after you consume the entire sequence and that line-seq returns a lazy sequence of strings. The most helpful thing for my problem was a post about how to read files after Clojure 1.3. Even still, I can't get it to work.

So here's my question: What dependencies and/or functions do I need to change in the following code to make it contemporary, reliable, idiomatic Clojure?

First namespace:

(ns chapter-data.word-count-1)

(defn parse-line [line]
  (let [tokens (.split (.toLowerCase line) " ")]
    (map #(vector % 1) tokens)))

(defn combine [mapped]
  (->> (apply concat mapped)
       (group-by first)
       (map (fn [[k v]]
              {k (map second v)}))
       (apply merge-with conj)))

(defn map-reduce [mapper reducer args-seq]
  (->> (map mapper args-seq)
       (combine)                ;; truncated in the original post; completed to
       (reducer)))              ;; thread through combine and reducer as defined above

(defn sum [[k v]]
  {k (apply + v)})

(defn reduce-parsed-lines [collected-values]
  (apply merge (map sum collected-values)))

(defn word-frequency [filename]
  (map-reduce parse-line reduce-parsed-lines (read-lines filename)))

Second namespace:

(ns chapter-data.average-line-length)

(def IGNORE "_")

(defn parse-line [line]
  (let [tokens (.split (.toLowerCase line) " ")]
    [[IGNORE (count tokens)]]))

(defn average [numbers]
  (/ (apply + numbers)
     (count numbers)))

(defn reducer [combined]
  (average (val (first combined))))

(defn average-line-length [filename]
  (map-reduce parse-line reducer (read-lines filename)))

But when I compile and run it in light table I get a bevy of errors:

1) In the word-count-1 namespace I get this when I try to reload the ns function after editing:

java.lang.IllegalStateException: spit already refers to: #' in namespace: chapter-data.word-count-1

2) In the average-line-length namespace I get similar name collision errors under the same circumstances:

clojure.lang.Compiler$CompilerException: java.lang.IllegalStateException: parse-line already refers to: #'chapter-data.word-count-1/parse-line in namespace: chapter-data.average-line-length, compiling:(/Users/.../average-line-length.clj:7:1)

3) Oddly, when I quit and restart Light Table, copy and paste the code directly into the files (replacing what's there), and call instances of their top-level functions, the word-count-1 namespace runs fine, giving me the number of occurrences of certain words in the test.txt file, but the average-line-length namespace gives me this:

"Warning: *default-encoding* not declared dynamic and thus is not dynamically rebindable, but its name suggests otherwise. Please either indicate ^:dynamic *default-encoding* or change the name. (clojure/contrib/io.clj:73)...

4) At this point, when I call the word-frequency function of the first namespace, it returns nil instead of the number of word occurrences, and when I call the average-line-length function of the second namespace, it returns

java.lang.NullPointerException: null
            core.clj:1502 clojure.core/val

by kurofune at April 17, 2014 02:52 PM



How to map Postgres bigint type in Slick 2.0.0

On one of my projects, I am trying to use Slick 2.0.0 for persisting data in the db. Actually, I already have an existing PostgreSQL 9.1 DB, but I would like to use Slick from now on: so I have (tried to, at least) mapped the table schema to what I thought should be the Slick equivalent.

For example, here is the Category table schema with its model object implemented as a case class and its slick mapping configuration:

Table schema

CREATE TABLE categories (
  id bigint NOT NULL,
  deletiontime bigint,
  name character varying(255),
  CONSTRAINT categories_pkey PRIMARY KEY (id)
);

Slick mapping

case class Category(id: Option[Long],
                    deletiontime: BigInt,
                    name: String)

class Categories(tag: Tag) extends Table[Category](tag, "categories") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def deletiontime = column[BigInt]("deletiontime")
  def name = column[String]("name")
  def * : ProvenShape[Category] = (id.?, deletiontime, name) <> (Category.tupled, Category.unapply)
}

When running my app though, the compiler complains that the mapping that I am using is not correct :

Error:(20, 36) could not find implicit value for parameter tm: scala.slick.ast.TypedType[BigInt]
def deletiontime = column[BigInt]("deletiontime")

Well, obviously, it seems that it can't handle the BigInt type.

Can someone help please ?

# Current config
[~] scala -version
Scala code runner version 2.10.1 -- Copyright 2002-2013, LAMP/EPFL

[~] psql --version
psql (PostgreSQL) 9.1.13
contains support for command-line editing
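As a hedged aside (my code, not from the question): Postgres `bigint` is a signed 64-bit integer, i.e. exactly Scala's Long (-2^63 .. 2^63-1), and Slick 2.0 ships a TypedType for Long but not for BigInt. So the usual fix is to declare the column as `column[Long]` (or, if the model must keep BigInt, to bridge it via something like `MappedColumnType.base[BigInt, Long](_.toLong, BigInt(_))`). A quick check that such a bridge is lossless within the bigint range:

```scala
// `bigint` covers exactly the Long range, so the BigInt <-> Long
// round trip is exact for any value the column can actually hold.
val dbValue: Long = 1397743200L           // e.g. a `deletiontime` as stored
val modelValue: BigInt = BigInt(dbValue)  // widen for a BigInt-based model
val roundTrip: Long = modelValue.toLong   // narrows back without loss

val edge = BigInt(Long.MaxValue).toLong   // still exact at the boundary
```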

by kaffein at April 17, 2014 02:09 PM


Why is this issue of Dynamo, the storage system of Amazon, so sensitive? [on hold]

In section 6.3 (Divergent Versions: When and How Many?) of the seminal paper about Dynamo, Amazon's storage system (SOSP'07), the authors remarked that

Experience shows that the increase in the number of divergent versions is contributed not by failures but due to the increase in number of concurrent writers. The increase in the number of concurrent writes is usually triggered by busy robots (automated client programs) and rarely by humans. This issue is not discussed in detail due to the sensitive nature of the story.

It has certainly roused my curiosity. So, what is the sensitive story, in your opinion? And why is it so sensitive?

by hengxin at April 17, 2014 02:05 PM


Error occurs while calling Scala code in Java in Eclipse

I would like to call the following Scala code in Java:

Scala code

package calculate


class CalculationScala

object CalculationScala {
  def main(args: Array[String]) {
    def operate(a: Double, b: Double, op: (Double, Double) => Double): Double = op(a, b)
    println(operate(5, 15, _ - _))
  }
}

Java code

package calculate;

public class Calculation {
    public static void main(String[] args) {
        CalculationScala calculationScala = new CalculationScala();
        CalculationScala.main(args);
    }
}

but the following error occurs.


Exception in thread "main" java.lang.NoClassDefFoundError: scala/Function2
    at calculate.CalculationScala.main(CalculationScala.scala)
    at calculate.Calculation.main(
Caused by: java.lang.ClassNotFoundException: scala.Function2
    at Method)
    at java.lang.ClassLoader.loadClass(
    at sun.misc.Launcher$AppClassLoader.loadClass(
    at java.lang.ClassLoader.loadClass(
    ... 2 more

How do I solve this issue?

by user2777965 at April 17, 2014 01:54 PM

Dynamic table structure generation in Scala Slick 2

I have a service which saves log data coming from multiple clients. There might be millions of entries. One log feed may have a structure like date,ip,hostname; another date,event,subtype,ip,hostname,clientId.

Each time a client registers in our service he provides his log map. I have to create a new table for him based on the map he provided. So I want to be able to generate table definitions at run time.

case class LogMap(...based on log map...)

class LogVault(tag: Tag, storage: String) extends Table[LogMap](tag, Some("data_vault"), storage) {
  // ...
}

Is it even possible? Maybe somebody can give me some ideas on how to achieve my goal.

Thank you

by Vladislav Miller at April 17, 2014 01:50 PM


Initial population for a genetic algorithm from one individual

I'm trying to use a GA to solve the quadratic assignment problem (QAP). We're planning on using it to provide good solutions when using branch and bound becomes impossible, and, as a requirement, I have to make it work as if it "improved" an existing solution.

The problem is of course in generating the initial population. I want to ensure diversity as well as good fitness, but the starting individuals, somehow, have to come from a single input individual (it's going to be a good one, in terms of fitness).

How should I go about this? I've thought about creating an initial population consisting of one half (or some proportion) of individuals similar to the initial one, and then adding the other half of newly, randomly (using some heuristic) generated individuals, to combine my solution with other parts of the solution space. Is this a good approach? If so, any recommendations on the random heuristic to use to generate the new random individuals?
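The half-mutated / half-random idea can be sketched concretely (my own code, with made-up names; QAP solutions are modelled as permutations). Mutated copies of the seed stay near the known-good region, fresh random permutations supply diversity, and the seed itself is kept as an elite member:

```scala
import scala.util.Random

// Mutate a permutation by a few random transpositions; swaps preserve
// the permutation property, so every individual stays a valid QAP solution.
def mutate(perm: Vector[Int], swaps: Int, rnd: Random): Vector[Int] = {
  var p = perm
  for (_ <- 1 to swaps) {
    val i = rnd.nextInt(p.length)
    val j = rnd.nextInt(p.length)
    p = p.updated(i, p(j)).updated(j, p(i))
  }
  p
}

// Seed population: the elite seed, `size/2` near neighbours, and the rest random.
def initialPopulation(seed: Vector[Int], size: Int, rnd: Random): Vector[Vector[Int]] = {
  val mutants = Vector.fill(size / 2)(mutate(seed, swaps = 2, rnd))
  val randoms = Vector.fill(size - size / 2)(rnd.shuffle((0 until seed.length).toVector))
  seed +: (mutants ++ randoms)
}
```

The swap count trades off diversity against fitness: more swaps per mutant drift further from the input individual. A greedy construction heuristic could replace the uniform shuffle for the random half.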

by Setzer22 at April 17, 2014 01:49 PM


call overridden method in scala

Why does the following code print the string in the overriding method instead of the original method, which should print "test"? I already use parent to point to self and use parent to call the method.

class myClass {
    def method(a:Int):String={
    def run={

val testClass=new myClass {
     override def method(a:Int):String={

by Daniel Wu at April 17, 2014 01:48 PM

Best way to execute concurrent tasks with dependencies

I have several tasks I need to execute. Some have dependencies on other tasks. Is there anything in scala.concurrent (or other libraries) that would make this easier?

For example, there are four tasks, A, B, C, and D. Task A depends on nothing, task B depends on tasks A and D, task C depends on A and B, and task D depends on nothing. Given a thread pool, do as much as possible as soon as possible.

This is similar to what Make or sbt can do with parallelizing tasks, using a dependency graph. (But neither of these are good fits, since I am not building; I am executing application logic which benefits from concurrent execution.)
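For the four-task example above, plain scala.concurrent Futures can already express the dependency graph: each task is wrapped in a Future, and dependent tasks are built by combining the Futures of their inputs, so the pool runs everything as early as its inputs allow. A sketch under assumed task bodies (the `runA`..`runD` functions are mine):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical task bodies standing in for real application logic.
def runA(): Int = 1
def runD(): Int = 4
def runB(a: Int, d: Int): Int = a + d
def runC(a: Int, b: Int): Int = a * b

val fa = Future(runA())                                // no dependencies: starts immediately
val fd = Future(runD())                                // no dependencies: starts immediately
val fb = fa.zip(fd).map { case (a, d) => runB(a, d) }  // runs once A and D are done
val fc = fa.zip(fb).map { case (a, b) => runC(a, b) }  // runs once A and B are done

val result = Await.result(fc, 10.seconds)
```

A and D run concurrently; B fires as soon as both finish, and C as soon as B does. For larger graphs, for-comprehensions over Futures scale the same pattern.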

by Paul Draper at April 17, 2014 01:42 PM


full tick and retail tick data feed difference

How is a full-tick institutional data feed, like Elektron from Reuters, different from a retail tick data feed? And which charting software works with the Elektron data feed?

by user7832 at April 17, 2014 01:38 PM

Dave Winer

How the cloud should work

There are lots of clouds. The one I'm thinking of is the one that Amazon runs, and that so many are trying to catch up to. Here's a list of ideas the next-layer-up will have.

  1. The basic unit shouldn't be a CPU, it should be an app. They are like the apps that run on your iPhone, but they run on a server, not on a hand-held device.

  2. I sign on to my account and see a list of my apps.

  3. To create a new app, click a button called New App. A dialog appears, asking what template I want to use (by template I mean GitHub repository, but that's something designers/programmers worry about). A configuration dialog appears, one created by the template designer. Checkboxes, text areas etc. These set the environment variables that configure the app.

  4. Storage is handled in a simplified S3-like store, without the quirks of S3. Static, web-accessible storage. Nicely configurable with user dialogs. Works like a file system and a web server. And it has an interactive mode that works like a file system (after all these years S3 still doesn't have one, because it's basically not possible).

  5. Double-click on an app to get a readout of what it's doing. Something like the Google Analytics dashboard pops up. I can see what kind of traffic it's getting. Look at how it's doing with its resources. Is a database filling up? Is response time okay? Has it been up continuously? This is what people who run server apps want to be able to see at a glance.

  6. If I want to add more resources, steal the slider from Heroku (see the video demo). The more resources the app uses, the more you pay. In times of peak load scale it up. When things quiet down, you can slide it back down.

This is what the simplified layer will look like. We can build so much more complex stuff when the basics that bog down deploying and maintaining servers get simplified and commoditized.

April 17, 2014 01:19 PM


source for yahoo finance equities volume traded

I am looking at some academic studies regarding volume of stock traded. Yahoo Finance is used as the data source for volume. Does anyone know where the volume figure comes from? Is it a compilation of feeds from a number of exchanges? Or just a couple of exchanges? Assuming that the volume reported by Yahoo Finance is a subset of total volume traded, does anyone have an opinion as to whether or not the figure is representative of what actually trades in a particular day?

by SCallan at April 17, 2014 01:16 PM

Usage of Brownian Bridge?

I was recommended to read something about the Brownian bridge. Could someone familiar with BB give some recommendations?

It was mentioned that BB benefits in 2 places

  1. BB could reduce the number of simulation paths; this reduces computation effort, especially when there are many underlying factors (say 20-30). I noticed that Papageorgiou has a paper "The Brownian Bridge Does Not Offer a Consistent Advantage in Quasi-Monte Carlo Integration" (2002). So does this point still hold?

  2. BB could reduce the computation effort on path-dependent derivatives. For example, during pricing of a barrier option, a path could be simulated with monthly scenarios of the factors; then BB could be used to estimate the probability of the path "knock-out" of the barrier. Which paper/book would you recommend on this topic?

by athos at April 17, 2014 01:02 PM


how to install gems for rbenv, using Ansible

Using Ansible, how can I use the gem (or other) module to install a gem (in this case, bundler) such that I can run the following command without error?

deployer@boxes-vm:~$ ~/.rbenv/bin/rbenv exec bundle install
rbenv: bundle: command not found

by user3097472 at April 17, 2014 01:00 PM

After upgrading to the latest version : org.specs2.execute.Failure required: T

After upgrading my Scala to the latest version I got this error:

type mismatch; found : org.specs2.execute.Failure required: T

My code:

  def shouldThrow[T <: Exception](exClazz: Class[T])(body: => Unit): T = {
    try {
      body
    } catch {
      case e: Throwable =>
        if (e.getClass == exClazz) return e.asInstanceOf[T]
        val failure = new Failure("Expected %s but got %s".format(exClazz, e.getClass), "", new Exception().getStackTrace.toList, org.specs2.execute.NoDetails())
        val rethrown = new FailureException(failure)
        throw rethrown
    }
    failure("Exception expected, but has not been thrown")
  }

I get this error at the last line, failure("...").

Any idea what's going on?

by ojciecmatki at April 17, 2014 12:56 PM

Scala Future does not return anything when allocating too much memory

Using Scala-IDE 3.0.3 (based on Scala 2.10.4), the following code completes correctly, displaying the first 10 values of the List computed in a future as well as the "Future completed" message:

import scala.concurrent._
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object FutureNonBlocking extends App {

    val f1: Future[List[Int]] = future {
        val t = List.range(1, 50).filter(_ % 2 == 0)
        t
    }

    f1.onComplete {
        case Success(value) => println(value.take(10))
        case Failure(e) => println("Something bad happened")
    }

    Await.complete(f1, 30 seconds)

However, changing the range List.range(1, 50) to List.range(1, 5000) does not display anything (and the Failure case is not triggered). Logically, it seems to be related to a memory issue, but I don't understand what is happening there.

Even stranger, running this code in a REPL does not cause the issue. What am I missing there?
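As an aside (my sketch, not the asker's code): scala.concurrent.Await provides only `ready` and `result`, not `complete`. Blocking on the result keeps an App's main thread alive until the computation finishes, so the callback has a chance to run regardless of the size of the range:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// The same computation as in the question; Await.result blocks the main
// thread until the future completes (or the timeout expires).
val f = Future(List.range(1, 5000).filter(_ % 2 == 0))
val evens = Await.result(f, 30.seconds)
evens.take(10)  // the first ten even numbers: 2 through 20
```

Without such a barrier, a plain App can reach the end of main and let the JVM's daemon threads die before onComplete fires, which matches the "nothing is printed" symptom.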

by pmudry at April 17, 2014 12:47 PM

Kafka scalaConsole has ClassNotFoundException

When starting scalaConsole from gradlew, the main Scala runner is not found on the classpath:

05:21:45/kafka-0.8.1-src:43 $./gradlew scalaConsole
The TaskContainer.add() method has been deprecated and is scheduled to be removed in Gradle 2.0. Please use the create() method instead.
Building project 'core' with Scala version 2.8.0
Building project 'perf' with Scala version 2.8.0
:core:compileJava UP-TO-DATE
:core:compileScala UP-TO-DATE
:core:processResources UP-TO-DATE
:core:classes UP-TO-DATE
Exception in thread "main" java.lang.NoClassDefFoundError: scala/tools/nsc/MainGenericRunner
Caused by: java.lang.ClassNotFoundException:
    at Method)
    at java.lang.ClassLoader.loadClass(
    at sun.misc.Launcher$AppClassLoader.loadClass(
    at java.lang.ClassLoader.loadClass(
:core:scalaConsole FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:scalaConsole'.
> Process 'command '/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.


by javadba at April 17, 2014 12:46 PM

Planet Theory

A Glitch in the Maptrix

We must always be on the lookout for glitches in the Matrix—anomalies that give us a fleeting glimpse into the algorithms and data structures of the computer simulation that we call Reality. But there’s also the Maptrix, the alternative reality supplied by online mapping services. I spend a fair amount of time exploring that digital terrain, and lately I’ve noticed a few glitches. Exhibit 1 is a tank farm in Bayonne, New Jersey, near where Kill Van Kull enters New York Bay:

Bayonne tank farm as seen in Apple Maps satellite view, showing tanks as irregular polyhedral approximations to a cylinder

I’ve seen a lot of oil tanks over the years, but never before have I encountered such ragged, faceted approximations to cylindrical form. These lumpy, polyhedral tanks suggest that in this little corner of industrial New Jersey, π has a value somewhat smaller than 3.14.

But the π peculiarity is ephemeral. The image above was captured from a laptop screen at the instant the landscape was first rendered in the Apple Maps program. Moments later most of the defects were magically healed, and the illusion of solid, circular reality reasserted itself:

Bayonne tank farm as seen in Apple Maps satellite view, showing tanks as normal cylinder

Here’s another example, from a tank farm in Carteret, New Jersey, just across the Arthur Kill from Staten Island. This time we’re looking down on the tanks from directly overhead. The image at left was captured as soon as the scene appeared on screen; at right is the rounded-out version that emerged a few second later. The software, again, is Apple Maps.

Linden tanks before and after

It’s not just Apple’s version of reality that has such anomalies. Here’s a sample from another source:

topologically defective tanks in Kearny NJ as seen by Google Maps

The mangled petroleum tanks in this image are in Kearny, New Jersey, a few miles north of Bayonne along the Passaic River. In this case the picture was taken by Google’s eye in the sky, not by Apple’s. The distortion is different, but no less disturbing. Now it’s not just the geometry of the cylinders that has gone goofy but also the topology. Some of those tanks won’t hold oil (or any other fluid); they have holes in them. And notice how the spill-containment walls surrounding the tanks also look moth-eaten.

Finally, returning to Apple Maps, and scrolling just half a mile northwest from the Carteret tanks, we cross the Rahway River into Linden, New Jersey, where we come upon this alarming scene:

Linden tank farms, some 3D but some flattened

Toward the right side of the image we see more cylindrical tanks, some with faint, remnant traces of polyhedral approximation. But when your glance wanders to the upper left, you find that the world suddenly loses all depth. The tank farm over there, and the water treatment plant at the top of the frame, are merely painted on the landscape—trompe l’oeil structures that don’t trompe anyone.

Linden low angle

This image offers another view of the same Linden landscape, looking obliquely to the west or northwest. The road that runs through the scene from foreground to background, crossing the New Jersey Turnpike at the very top of the frame, is Tremley Point Road. Suppose you were driving west along that road. Just beyond the row of lumpy trees that extends from the left edge of the image toward the road, you would cross a mysterious boundary, leaving behind the pop-up 3D world and entering flatland. What would happen to you there? Would you be pancaked like those tanks, reduced to a two-dimensional object painted on the pavement, with a painted shadow to accompany you?

Leaving behind tank farms but still poking around in the same general neighborhood of northern New Jersey, I was able to record four stages in the “construction” of the Bayonne Bridge, which crosses Kill Van Kull between Bayonne and Staten Island. These are images from Google Maps, the first three captured at intervals of about a second, the last after a delay of a few seconds more:

stages 1 through 4 in the "construction" of the Bayonne Bridge, the last captured at 4:08.32 PM

In calling attention to these oddities in online map imagery, my aim is not to mock or belittle. To me, these maps are one of the marvels of the age. A century ago, it was a huge novelty and liberation to soar above the earth’s surface for the first time, and see the landscape spread out below as if it were a map. It’s no less remarkable that we have now transformed the experience of looking at a map into something like flying an airplane.

The first “satellite view” maps were just that: montages of images made from hundreds of miles above the earth’s surface, looking straight down. They portrayed the territory as an array of pixels, assigning an RGB value to every (lat, lon) pair on the surface of a sphere.

The next step was to add a digital elevation model, giving each point on the surface a z value as well as a color. This scheme allows us to gaze obliquely across the landscape and see a realistic rendering of mountains, river valleys, and other natural landforms. It works well as long as you don’t try to get too close: the model is well-suited to forests, but not to trees. And it doesn’t work well at all for manmade artifacts.

In representing engineered structures, one weakness of elevation maps is that any reasonable scheme for interpolating between sample points will tend to round over corners and sharp edges, so that buildings become lumpish mounds. Symmetries are also lost: the sampling fails to preserve the rectilinearity or circularity of the various objects we strew around the inhabited patches of the world. And the biggest problem with an elevation map is that it’s a mapping in the mathematical as well as the cartographic sense. The surface of the planet is defined by a smooth, single-valued function, assigning a unique elevation z to every (lat, lon) point on the sphere. Any line radiating from the center of the earth must cross that surface in exactly one point. As a result, there can be no vertical cliffs and no overhangs. Also no bridges. The surface defined by the elevation model can go under a bridge or over it, but not both.
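The corner-rounding and single-valuedness are easy to see in one dimension. A tiny sketch (mine, not from the article): sample a sharp building edge onto a grid of heights, and any interpolation between samples can only produce intermediate values, so the vertical wall becomes a ramp:

```scala
// A 1-D "elevation model": heights sampled at grid points 0, 1, 2, 3,
// with a sharp 10-unit wall between points 1 and 2.
val samples = Vector(0.0, 0.0, 10.0, 10.0)

// Linear interpolation between neighbouring samples -- a stand-in for the
// smooth, single-valued surface z(lat, lon) described above.
def height(x: Double): Double = {
  val i = x.toInt.min(samples.length - 2)
  val t = x - i
  samples(i) * (1 - t) + samples(i + 1) * t
}

height(1.5)  // 5.0: halfway up the "wall" -- the sharp edge is rounded into a slope
```

And because `height` assigns exactly one z to each x, the model can never represent an overhang, let alone a roadway with open air beneath it.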

The latest mapping programs are apparently addressing these issues by building explicit three-dimensional models of selected landscape features. I see evidence of several different techniques. The Bayonne Bridge model that assembles itself in the four frames above is clearly based on a triangulated mesh: all the surfaces making up the envelope of the structure are decomposed into elementary triangles. The cylindrical tanks in the Apple Maps images seem to grow their circular form through an Archimedean process, in which a circle is defined as the limit of an n-gon as n goes to infinity. Elsewhere, I think we may be seeing some kind of spline curves or patches.

Having constructed the model and embedded it in the landscape, the next trick is to project the pixel pattern from a photograph onto the model surface, as a texture. This process too has a certain comical potential:

Barge and tug near Bayonne containerport

What we’re seeing here is apparently a tugboat nudging a barge equipped with a crane. The matching of surface pattern to three-dimensional form has gone badly awry, which gives the whole scene a toylike quality. I find it charming. In future years, when the Maptrix has become a hyperrealistic, real-time virtual world with day and night, weather and seasons—maybe with inhabitants who wave back at you—we’ll wax nostalgic over such quaint foibles.

by Brian Hayes at April 17, 2014 12:45 PM


scala parser combinators (json) in scala js

I'm trying to get the JSON parser from scala.util.parsing.json to work in Scala.js. I have replaced everything that could be causing the Uncaught java.lang.RuntimeException: unimplemented error, but so far without success; I cannot figure out from Chrome or Firefox what else has to be replaced to avoid it.

by user3464741 at April 17, 2014 12:41 PM

Java/Scala - hmacSHA256 signature different every time

I'm getting this strange behaviour where the HmacSHA256 signature comes out different for the same input and key every time, and I'm not sure why. Here is the code and some of the printlns.

def apply(algorithm: String, data: String, key: String): Array[Byte] = {
  val _key = Option(key).getOrElse(throw new IllegalArgumentException("Missing key for JWT encryption via " + algorithm))
  val mac: Mac = Mac.getInstance(algorithm)
  val secretKey: SecretKeySpec = new SecretKeySpec(_key.getBytes, algorithm)
  mac.init(secretKey)
  val res = mac.doFinal(data.getBytes)

  println(s"$algorithm $data $key $res $secretKey")
  res
}

Here is the logging from the testsuite using this code:

HmacSHA256 eyJIZXkiOiJmb28ifQ== secretkey [B@4959742d javax.crypto.spec.SecretKeySpec@fa77d7a8
HmacSHA256 eyJIZXkiOiJmb28ifQ== secretkey [B@6a790e37 javax.crypto.spec.SecretKeySpec@fa77d7a8
HmacSHA256 eyJIZXkiOiJmb28ifQ== secretkey [B@2347f330 javax.crypto.spec.SecretKeySpec@fa77d7a8
HmacSHA256 eyJIZXkiOiJmb28ifQ== secretkey [B@5298db1f javax.crypto.spec.SecretKeySpec@fa77d7a8
HmacSHA256 eyJIZXkiOiJmb28ifQ== secretkey [B@5cb80eb0 javax.crypto.spec.SecretKeySpec@fa77d7a8

Why are the signatures all different??
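For what it's worth, printing an Array[Byte] directly shows its identity hash (the [B@... in the log), not its contents, so two equal signatures look different on every run; hex-encoding the bytes before printing makes equality visible. A hedged sketch (not the original code; the UTF-8 and hex choices are mine):

```scala
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

object HmacHex {
  // HMAC is deterministic: same algorithm, key and data always give the same bytes.
  def sign(algorithm: String, data: String, key: String): String = {
    val mac = Mac.getInstance(algorithm)
    mac.init(new SecretKeySpec(key.getBytes("UTF-8"), algorithm))
    // Hex-encode so that equal signatures also *print* equally
    mac.doFinal(data.getBytes("UTF-8")).map("%02x".format(_)).mkString
  }

  def main(args: Array[String]): Unit = {
    val a = sign("HmacSHA256", "eyJIZXkiOiJmb28ifQ==", "secretkey")
    val b = sign("HmacSHA256", "eyJIZXkiOiJmb28ifQ==", "secretkey")
    println(a == b) // true
  }
}
```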

by JasonG at April 17, 2014 12:34 PM

How to compile rsyslog-7.6.1 with omzmq3 enabled

I am trying to compile rsyslog-7.6.1 with omzmq3 enabled (./configure --prefix=/usr --enable-omzmq3). I get the error below when I run sudo make:

omzmq3.c:247:9: error: void value not ignored as it ought to be
         if(-1 == zsocket_connect(pData->socket, (char*)pData->description)) {
make[2]: *** [omzmq3_la-omzmq3.lo] Error 1
make[2]: Leaving directory `/home/naveen/Downloads/rsyslog-7.6.1/plugins/omzmq3'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/naveen/Downloads/rsyslog-7.6.1'
make: *** [all] Error 2

Note: zmq version 3.2.4, rsyslog version 7.6.1.

Any solution on this would be greatly appreciated!

by Naveen Subramani at April 17, 2014 12:24 PM

Planet Emacsen

Irreal: An Emacs Lisp Based Common Lisp

Lars Brinkhoff has a really interesting project up at GitHub. It’s emacs-cl, a Common Lisp implemented in Emacs Lisp. This probably isn’t all that useful but it sure is awesome. As far as I can tell, it’s pretty complete: CLOS and pretty printing are missing, but it has lexical closures, packages, multiple values, bignums, adjustable arrays, and other CL features.

This is one of those things that is noteworthy not because it’s going to change your life or be particularly useful but because it shows what a tool you already have can do. It’s often said that Emacs Lisp is crippled and not powerful enough to do real work. Guys like Nic Ferrier and Kris Jenkins have already put the lie to that notion but here is yet another example of what Emacs Lisp can do.

I’ve said several times that Emacs is the closest thing we have to a Lisp machine but mostly we don’t pursue that aspect of Emacs. Brinkhoff’s project reminds us that Emacs really does provide a Lisp environment and that the Lisp it provides is far from toothless.

by jcs at April 17, 2014 12:20 PM


How do you add to a custom function such as String => Int

Not quite sure about the wording of the title, but what I'm trying to do is extend an environment defined as type Environment = String => Int. I got this from the Scala tutorial for Java programmers (Case Classes and Pattern Matching).

So i've got a function which can lookup in this environment.

type Environment = String => Int

val env: Environment = { case "x" => 5 }

def lookupEnv(env: Environment, x:String): Int =

def extendEnv(env: Environment, x:String, v:Int)

Any help would be appreciated. Thanks
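Since Environment is just a function type, extending it amounts to returning a new function that checks the new binding first and falls back to the old environment otherwise. A sketch along those lines (names follow the question; this is one standard approach, not necessarily the tutorial's exact code):

```scala
object EnvDemo {
  type Environment = String => Int

  def lookupEnv(env: Environment, x: String): Int = env(x)

  // Return a new function: answer v for x, defer to the old env otherwise.
  def extendEnv(env: Environment, x: String, v: Int): Environment =
    y => if (y == x) v else env(y)

  def main(args: Array[String]): Unit = {
    val env: Environment = { case "x" => 5 }
    val env2 = extendEnv(env, "y", 7)
    println(lookupEnv(env2, "x")) // 5
    println(lookupEnv(env2, "y")) // 7
  }
}
```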

by user2433237 at April 17, 2014 12:19 PM


Planet Theory

Should you reveal a P = NP algorithm?

A reader asks
What would you do if you could prove that P=NP with a time-complexity of n^2 or better... moreover, would you publish it?
There are all these statements of the good that could come of it. But how would the government react in its present state? Would it ever see the light of day? How would a person be treated if they just gave it away on the internet? Could a person be labeled a threat to national security for giving it away?
I consider this a completely hypothetical and unlikely scenario. If you think this applies to you, make sure you truly have a working algorithm. Code it up and mint yourself some bitcoins, but not enough to notice. If you can't use your algorithm to mint a bitcoin, you don't have a working algorithm.

The next step is up to you. I believe that the positives of P v NP, like their use in curing diseases for example, greatly outweigh the negatives. I would first warn the Internet companies (like was done for heartbleed) so they can modify their systems. Then I would just publish the algorithm. Once the genie is out of the bottle everyone can use it and the government wouldn't be able to hide it.

If you can find an algorithm so can others so you should just take the credit or someone else will discover it. I don't see how one can get into trouble for revealing an algorithm you created. But you shouldn't take legal advice from this blog.

Once again though no one will take you seriously unless you really have a working algorithm. If you just tell Google you have an algorithm for NP-complete problem they will just ignore you. If you hand them their private keys then they will listen.

by Lance Fortnow at April 17, 2014 12:13 PM


Applying var or #' to a list of functions in Clojure

I'm trying to read metadata for a collection of functions in Clojure, but the var special form (and the #' reader macro) does not work unless it is given the symbol directly.

; this works
(var my-fn)

; this doesn't
(defn val-it [x] (var x))
(val-it my-fn)

Is there any way to get this to work within the scope of another function?

by matsko at April 17, 2014 12:10 PM

Fred Wilson

Traces – A Group Show For Young Artists

It always makes me happy to see my daughter Emily send a tweet, my son Josh repost a song on SoundCloud, and my daughter Jessica post something to her Tumblr. They aren’t always so keen to use the services we back at USV, but they do come around to them from time to time.

But I think the biggest kick I got in this area was a few weeks ago when Jessica and three friends launched this Kickstarter.

It was funded quickly, over the course of a weekend, and they don’t need more money so if you are in the giving mood today, you might want to find another project to back.

If you live in NYC, you might want to attend the show. It will be at the Gowanus Loft in Brooklyn on June 6th, 7th, and 8th. The opening will be the evening of the 6th.

Although I have funded many projects on Kickstarter over the years, I have never made one. It was enlightening to watch Jessica and her friends Lenora, Zoe, and Lolita go through the process of defining and explaining their project, making a video, and scoping out the rewards. I gave them some advice here and there, mostly on the rewards which are great btw, and also on setting up Amazon payments. I came away with an appreciation for what a project creator goes through in making a Kickstarter. And of course, I experienced the thrill of pushing it out and the joy of seeing it funded.

I’ve said this many times on this blog, but I will say it again. Kickstarter is an iconic example of what makes the Internet so awesome. I am proud to be an investor and I am equally proud to be the father of a Kickstarter project creator.

by Fred Wilson at April 17, 2014 12:08 PM


Scala collection of strings - a way to output strings without double quotes

I have a collection of strings.

When I iterate through the collection (using .map()) to output the values, the string "Lipsum" gets output with double quotes.

Inside Scala class:

case class Container(id: Int, name: String, url: String)
val tags = (i \\ "tags").flatMap { tag =>[JsArray] { element =>
    Container(element \ "id", element \ "name", element \ "url")
  }
}

Inside Template:

<div class="item"> { item =>

Collection output ( println( ):


Current output of the string looks like this:

<div class="item">

Desired output:

<div class="item">

How can I get rid of the double quotes?
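This looks like Play JSON, where element \ "name" yields a JsValue; the toString of a JSON string node renders it as JSON, quotes included, while extracting the underlying String (with .as[String] in Play) drops them. A dependency-free stand-in sketch of the distinction (JsString here is a mock, not the Play class):

```scala
// Mock of a JSON string node (not Play's class): toString renders JSON,
// so the quotes belong to the rendering, not to the underlying value.
final case class JsString(value: String) {
  override def toString: String = "\"" + value + "\""
}

object QuoteDemo {
  def main(args: Array[String]): Unit = {
    val name = JsString("Lipsum")
    println(name)       // "Lipsum"  (the node's JSON rendering, quoted)
    println(name.value) // Lipsum    (the raw string, no quotes)
  }
}
```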


by Alex at April 17, 2014 11:19 AM


Expected empirical entropy

I'm thinking about some properties of the empirical entropy for binary strings of length $n$, and the following question has come up:

$\underbrace{\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)}_{\large\#}\;\overset{?}{=}\;n-\varepsilon_{n}\;\;\;$

with $\;\;\lim\limits_{n\rightarrow\infty}\varepsilon_{n}=c\;\;\;$ and $\;\;\;\forall n:\;\varepsilon_{n}>0$

where $c$ is a constant.

Is that equation true? For which function $\varepsilon_{n}$ respectively which constant $c$?


$n=2\;\;\;\;\;\;\;\rightarrow\;\#=1 $
$n=3\;\;\;\;\;\;\;\rightarrow\;\#\approx 2.066 $
$n=6\;\;\;\;\;\;\;\rightarrow\;\#\approx 5.189 $
$n=100\;\;\;\rightarrow\;\#\approx 99.275 $
$n=5000\;\rightarrow\;\#\approx 4999.278580 $
$n=6000\;\rightarrow\;\#\approx 5999.278592 $



$H_{0}(w)$ is the zeroth-order empirical entropy for strings over $\Sigma=\left\{0,1\right\}$:

  • $H_{0}(w)=\frac{|w|_{0}}{n}\log\frac{n}{|w|_{0}}+\frac{n-|w|_{0}}{n}\log\frac{n}{n-|w|_{0}}$

where $|w|_{0}$ is the number of occurrences of $0$ in $w\in\Sigma^{n}$.

The term $nH_{0}(w)$ corresponds to the Shannon entropy of the empirical distribution of binary words with respect to the number of occurrences of $0$ and $1$ in $w\in\Sigma^{n}$.

More precisely:
Let the words in $\left\{0,1\right\}^{n}$ be possible outcomes of a Bernoulli process. If the probability of $0$ is equal to the relative frequency of $0$ in a word $w\in\left\{0,1\right\}^{n}$, then the Shannon-entropy of this Bernoulli process is equal to $nH_{0}(w)$.

At this point, my question should be more reasonable since the first term normalizes the Shannon-entropies for all empirical distributions of words $w\in\left\{0,1\right\}^{n}$.
Intuitively I thought about getting something close to the Shannon-entropy of the uniform distribution of $\left\{0,1\right\}^{n}$, which is $n$.
By computing and observing some values I've got the conjecture above, but I'm not able to prove it or to get the exact term $\varepsilon_{n}$.

It is easy to get the following equalities:

$\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)\;\;=\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize |w|_{0}\log\frac{n}{|w|_{0}}+(n-|w|_{0})\log\frac{n}{n-|w|_{0}}$

$=\large\frac{1}{2^{n}}\normalsize\sum\limits_{k=1}^{n-1}\binom{n}{k}\left(k\log\frac{n}{k}+(n-k)\log\frac{n}{n-k}\right)$

and it is possible to apply some logarithmic identities but I'm still in a dead point.

(The words $0^{n}$ and $1^{n}$ are ignored, because the Shannon entropies of their empirical distributions are zero.)

Any help is welcome.
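A hedged observation rather than an answer: the quantity $\#/n$ is exactly the expected plug-in (maximum-likelihood) entropy estimate computed from $n$ fair coin flips, and the classical Miller–Madow bias expansion for the plug-in estimator, assuming it applies here, predicts the constant:

$$\mathbb{E}\left[\hat{H}\right]\;=\;H-\frac{k-1}{2n\ln 2}+O\!\left(\frac{1}{n^{2}}\right)\quad\Longrightarrow\quad\#\;=\;n\,\mathbb{E}\left[\hat{H}\right]\;=\;n-\frac{1}{2\ln 2}+O\!\left(\frac{1}{n}\right),$$

using $H = 1$ bit and alphabet size $k = 2$. This would give $c = \frac{1}{2\ln 2}\approx 0.72135$, consistent with the computed values $\varepsilon_{5000}\approx 0.721420$ and $\varepsilon_{6000}\approx 0.721408$.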

by Danny at April 17, 2014 11:13 AM



Theta functions of automata relations

Let $A,B$ be two automata over the same alphabet $\Sigma$; they are supposed to be complete, strongly connected DFAs. We denote by $._A$ (resp. $._B$) the action induced by $\Sigma^*$ over $Q(A)$, i.e. $u ._A s$ designates the state reached from $s$ upon reading $u$ - note that this is more commonly denoted by $\delta_A(u,s)$ but we adopt the first notation for conciseness. Say that an "automata relation" between $A$ and $B$ is a relation $R \subseteq Q(A) \times Q(B)$ such that:

(*) if $R(s,t)$ holds, then for any $u,v \in \Sigma^*$ we have: $u ._A s = v ._A s \Rightarrow u ._B t = v ._B t$.

Given $R$ and $t \in Q(B)$, we then define $R^{-1}(t) = \{ s \in Q(A) : R(s,t) \text{ holds} \}$.

Given such a relation, it is possible to define an action of $Q(A)$ over $Q(B)$ as follows: given $t \in Q(B)$ and $s \in Q(A)$, we define $s^{-1} t$ as the unique state $t' \in Q(B)$ such that for every $s' \in R^{-1}(t)$, $L_A(s',s).t = \{t'\}$. Note that the existence of $t'$ follows from the strong connectedness of $A$, and that the unicity follows by observing that if $u,v \in L_A(s',s)$, we have $u ._A s' = v ._A s' = s$ and thus $u ._B t = v ._B t = t'$.

Now, given the tuple $T = (A,B,R)$ and given two weight functions $f,g$ over $Q(B)$, let us define the "theta function" $\Theta_{T,f,g}(s) = \sum_{t \in Q(B)} f(t) g(s^{-1} t)$.

Question: what are the interesting questions to ask about this notion?

(Some examples and suggestions should follow, I'm still trying to clear my thoughts on this problem.)

by Super0 at April 17, 2014 11:01 AM


Using image processing to infer if a seat is occupied [migrated]

I'm looking into different attendance taking options for a particularly large seminar class (600ish students) and I was thinking - what if a picture was taken of the seats, maybe with a visual cue on them such as reflective tape. Another picture was taken during a class with assigned seats and if the reflective tape is covered, it could be a good indication of whether the seat is occupied or not.

What kind of software should I be looking at to achieve this? The main requirements are detecting an object (perhaps made of some material that is obvious to the computer) and detecting whether it is gone.

Thank you for your help. Let me know if I should post this somewhere else.

by user1066886 at April 17, 2014 10:59 AM


Maximum matching versus preferential assignment

There are several ways to solve the marriage problem. The "preferential assignment" approach consists in forming couples on the basis of preferred characteristics expressed by each individual. An alternative approach consists in selecting preferred partners among lovers: suppose that you have the graph $G$ of sexual relations, and that each relation $e$ is rated with some preference factor $w(e)$, then computing a maximum weighted matching in $G$ would aim at optimizing the sexual adequacy between lovers.

If there are only heterosexual relationships with two sexes (men and women), then this is a bipartite matching problem, which is conjectured to be in $NC_2$. Now suppose that a devious adversary had the following abilities: (i) control of the sexual performances of the individuals, (ii) ability to solve parallel problems efficiently. Then this adversary could trick the heterosexual group into forming "bad unions" in a pessimistic sense, i.e. leading to increased instability or corruption.

So my question is: what are the remedies to this problem? Does the introduction of same-sex relations or ternary relations change the situation? I suspect that the goal would be to obtain a P-complete matching problem to defeat the parallel adversary.

by Super0 at April 17, 2014 10:58 AM

Crossover of rationality from finite to integer alphabets?

Rational relations are defined over a finite, unordered alphabet; they can be defined e.g. in terms of transducers. I was wondering if there had been attempts to define a notion of rationality for relations over a totally ordered alphabet (e.g. integers)? I'd like to suggest a possible definition for a "transducer" in this setting, but it is not clear whether it comes with interesting properties.

Assume that you have a word of the form $w = | w_1 | w_2 | ... | w_l |$ where each $w_i$ is a sequence of integers, and each bar corresponds to a marker $m_i$. Given an index $i$, let $L^{-}_{i}(w)$, resp. $L^+_i(w)$, denote the letter to the left, resp. to the right, of marker $m_i$, and let $P(w)$ denote this set of $2l$ positions. Now, given two positions $p,q \in P(w)$ and an integer $r$, define the operation $D(p,q,r)$ as follows:

  • if $w[p]=w[q]+r$, then move the letter at position $p$ to position $q$;
  • otherwise, do nothing.

For simplicity, we assume that these operations are "monotone", i.e. they always involve a pair of positions with $q$ to the right of $p$. We then define a "transducer" as a tuple $F = (S,l,T)$ where $S$ is a set of states, $l$ is the number of allowed markers, and $T$ is a set of transitions. Each transition has the following form: If the current state is $s$ and the operation $D(p,q,r)$ is valid, then perform it and move to state $s'$.

$F$ then defines a relation $R_F$ as follows: given two words $u,v$, $R_F(u,v)$ holds iff there is an execution of $F$ starting with $(|) (u) (|^{l-1})$ and ending with $(|^{l-1}) (v) (|)$ (parentheses added for clarity). By extension, $F$ defines a language $L_F$ which is the set of words $u$ such that $R_F$ sorts $u$, i.e. $R_F(u,v) \Leftrightarrow v$ is the sorted version of $u$.

This seems to be the simplest possible definition in this setting, and it's already quite powerful, as it captures some well-known permutation classes (stack-sortable, deque-sortable). It's probably a difficult question, but it would be interesting to know the expressive power of these machines, and to look at possible extensions (as this definition enforces a static boundary and no erasures).

by Super0 at April 17, 2014 10:56 AM


Prove $\ L = {0^n1^n2^m : n \neq m}\ $ is not context-free

Please check my work

Prove $\ L = {0^n1^n2^m : n \neq m}\ $ is not context-free

Assume L is a context-free language. Then there exists $p\in\mathbb Z^+$ such that every $s\in L$ with $\left | s\right |\ge p$ can be written as $s = uvxyz$ with $\left | vy \right | \gt 0$ and $\left | vxy \right | \le p$, where $S_i = uv^ixy^iz\in L$ for all $i\ge 0$.

Let $\ s = 0^{p!}1^{p!}2^{(p+1)!}\ $

Case 1: $\ vxy = 0^j\ $ for $\ 1 \le j \le p\ $

$\ S_2 = 0^{p!+j}1^{p!}2^{(p+1)!}\ $

Comparing the numbers of 0s and 1s: $p! + j \ne p!$, so $S_2 \notin L$, a contradiction.

Case 2: $\ vxy = 1^j\ $ for $\ 1 \le j \le p\ $

$\ S_2 = 0^{p!}1^{p!+j}2^{(p+1)!}\ $

Comparing the numbers of 0s and 1s: $p! \ne p! + j$, so $S_2 \notin L$, a contradiction.

Case 3: $\ vxy = 2^j\ $ for $\ 1 \le j \le p\ $

$\ S_i = 0^{p!}1^{p!}2^{(p+1)! + (i - 1)j}\ $

Find an i where $\ p! = (p+1)! + (i - 1)j\ $

$\ p! - (p+1)! = (i - 1)j\ $

$\ p! - (p+1)p! = (i - 1)j\ $

$\ p! (1 - (p+1)) = (i - 1)j\ $

$\ p*p! = (i - 1)j\ $

$\ (p*p!)/j = i - 1\ $

$\ (p*p!)/j + 1 = i\ $

$\ p!\ $ is divisible by $\ j\ $ because $\ 1 \le j \le p\ $

The left side is greater than or equal to 2.

As a result, $\ i\ $ exists.

Case 4: $\ vxy = 0^j1^k\ $ for $\ 1 \le j + k \le p\ $

$\ i = (pp!)/j + 1\ $ $\ i = (pp!)/k + 1\ $

One of these two equations must hold. If only one holds, then the number of 0s differs from the number of 1s; if both hold, then $n = m$. Both cases are contradictions.

Case 5: $\ vxy = 1^j2^k\ $ for $\ 1 \le j + k \le p\ $

$\ S_2 = 0^{p!}1^{p!+j}2^{(p+1)!+k}\ $

Comparing the numbers of 0s and 1s: $p! \ne p! + j$, so $S_2 \notin L$, a contradiction.

The pumping lemma fails to hold, so L is not a context-free language.

by user1136671 at April 17, 2014 10:55 AM


Complexity of ambiguous parsing?

Consider the following problem. We are given a set of words $W \subseteq \Sigma^*$ and a set of sentences $S \subseteq W^*$. The "ambiguous parsing" problem consists in enumerating, given a word $w \in \Sigma^*$, the sentences $s \in S$ that parse to $w$ (meaning that $s$ is a sequence of words $w_1,\ldots,w_k$ whose concatenation is $w$).

The problem has an interesting variant, which is the search for spoonerisms: we now allow some number $M$ of letter swaps, and we want to enumerate the valid sentences obtainable from $w$ with at most $M$ swaps.

So my question is: under what condition on the language $S$ can the above problem be solved in linear time? By this I mean time $O(|W|+|S|+|w|+c)$ where $c$ is the number of solutions.

(NOTE ADDED: in response to the comments below, the vocabulary $W$ is assumed to be finite, and the set $S$ can be infinite but "finitely described" by some device $\Delta_S$. As we can report all meaningful subwords of $w$ in $O(|w|)$ time, it may be possible to reduce my question to the enumeration of the accepting paths of a non-deterministic automaton or the derivations of an ambiguous grammar. What I am looking for is restricted models that allow a linear or amortized-constant delay enumeration of the parsings.)

by Super0 at April 17, 2014 10:54 AM

The power of binary hierarchies?

Consider the following "election" process for a set of individuals $P$. Fix two constants $c$ and $c'$, and suppose that each $p \in P$ has a fixed "karma" $k(p)$, and that $p$ can "fool" every person $p'$ such that $k(p') < c k(p)$. We construct a forest $F$ with vertex set $P$, which is initially empty. Repeat the following process:

  • choose three individuals $p_1,p_2,p_3$ in $P$ minimizing the sum $k(p_1) + k(p_2) + k(p_3)$;

  • select $p_i$ whose karma is maximal (resp. minimal, median);

  • modify $F$ by making $p_i$ a parent of the other two nodes, multiply its karma by $c'$, and remove the other two nodes from $P$ (as they are no longer in the top of the hierarchy).

We may assume that the process is repeated until $F$ is a tree. This is rather fuzzy at this point, but I am trying to understand the interplay between (i) the initial distribution of the karmas, and (ii) the choice of the max/min/med criterion for election. Intuitively, it seems that we should aim at minimizing the number of persons that can be fooled and/or the karma discrepancy between the top and the bottom of the hierarchy.

(Note: this is a first attempt at modelling the problem of "power abuse" in hierarchies but this is probably not satisfactory as influence can actually flow in both directions of the tree).

by Super0 at April 17, 2014 10:53 AM



Totally ordered multicast with Lamport timestamps

I'm studying distributed systems and synchronization, and I didn't follow this solution for totally ordered multicast with Lamport timestamps. I read that it doesn't need acks to deliver a message to the application, but:

"It is sufficient to multicast any other type of message, as long as that message has a timestamp larger than the received message. The condition for delivering a message m to the application is that another message has been received from each other process with a larger timestamp. This guarantees that there are no more messages underway with a lower timestamp."

This is a definition from a book. I tried to apply this definition to an example but I guess that something is wrong.


There are 4 processes and they multicast the following messages (the second number in parentheses is timestamp) :
P1 multi-casts (m11, 5); (m12, 12); (m13, 14);
P2 multi-casts (m21, 6); (m22, 14);
P3 multi-casts (m31, 5); (m32, 7); (m33, 11);
P4 multi-casts (m41, 8); (m42, 15); (m43, 19).

Supposing that there are no acknowledgments, can I determine which messages can be delivered and which cannot? Based on the definition, my guess is that only m11 and m31 can be delivered to the application, because all the other messages received will have a greater timestamp, but this seems very strange, and I think I didn't understand the delivery condition very well.
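Not an answer from the book, but the quoted delivery condition can be sketched as a predicate over the timestamp-ordered queue (simplified: acknowledgements and FIFO channels are ignored, and all names are illustrative):

```scala
// A multicast message carries its sender id and Lamport timestamp.
case class Msg(sender: Int, ts: Int)

object TotalOrder {
  // The head of the timestamp-ordered queue is deliverable once every other
  // process has queued something with a strictly larger timestamp: nothing
  // with a smaller timestamp can then still be in transit from them.
  def deliverable(queue: List[Msg], processes: Set[Int]): List[Msg] =
    queue.sortBy(m => (m.ts, m.sender)) match {
      case head :: rest =>
        val others = processes - head.sender
        if (others.forall(p => rest.exists(m => m.sender == p && m.ts > head.ts)))
          List(head)
        else Nil
      case Nil => Nil
    }
}
```

Under this reading, early messages like m11 become deliverable as soon as every other process has sent something later, and delivery then proceeds message by message; the final messages of each process cannot be delivered until further traffic (or explicit acks) arrives, which may be the source of the confusion.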

by Fabrizio at April 17, 2014 10:44 AM



Scala - Why are overloaded methods not called based on runtime class?


Given a simple class hierarchy

abstract class Base {}
class A extends Base {}
class B extends Base {}

And a typeclass

trait Show[T] {
  def show(obj: T): String
}

With overloaded implementations

class ShowBase extends Show[Base] {
  override def show(obj: Base): String = "Base"
}
object ShowA extends ShowBase {
  def show(obj: A): String = "A"
}
object ShowB extends ShowBase {
  def show(obj: B): String = "B"
}

When executing following test-case

Seq((new A(), ShowA), (new B(), ShowB)).foreach {
  case (obj, showImpl) => println((, obj.getClass.getSimpleName))
}

This should produce (A,A) \n (B,B), but it produces (Base,A) \n (Base,B) instead.


What's going on here? Shouldn't the method with the most specific runtime type be called - Polymorphism 101?

This issue looks similar to another question where a type parameter prevents the correct resolution of which method to call. However, in my case the type-parameterized show method is provided with actual implementations, in contrast to the type-parameterized method in the other question.
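The root cause is that overload resolution in Scala (as in Java) happens at compile time against static types, while only overriding is resolved at run time; inside the foreach, showImpl has static type ShowBase, so only the show(obj: Base) overload is applicable. A minimal sketch of the distinction (hypothetical names):

```scala
class Animal
class Dog extends Animal

object Overloads {
  // Two overloads: the compiler picks one using the argument's *static* type.
  def name(a: Animal): String = "animal"
  def name(d: Dog): String = "dog"

  def main(args: Array[String]): Unit = {
    val d = new Dog
    val a: Animal = d  // the very same object, seen through a wider static type
    println(name(d)) // dog
    println(name(a)) // animal -- the runtime class (Dog) plays no role here
  }
}
```

The usual way to get runtime dispatch is overriding (one method, redefined in subclasses) or pattern matching, rather than overloading.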

Naive solution

Extending the ShowA implementation (analogue for ShowB):

object ShowA extends ShowBase {
  def show(obj: A): String = "A"
  override def show(obj: Base): String = {
    require(obj.isInstanceOf[A], "Argument must be instance of A!")
    show(obj.asInstanceOf[A])
  }
}
gives the expected output. The problem is that mixing A with ShowB will result in an exception.

by mucaho at April 17, 2014 10:40 AM


URM simulate TMs/TMs simulate URM [on hold]

Theorem: a partial function is computable by an Unlimited Register Machine (URM) iff it is computable by a standard TM.

How can one prove this theorem? Perhaps by simulating each machine with the other?

Can anyone provide some more details? Thanks.

by geasssos at April 17, 2014 10:28 AM


Exponential gap on neural network layers

I read here that there are function families which need $\mathcal{O}(2^n)$ nodes in a neural network with at most $d - 1$ layers to represent the function, while only $\mathcal{O}(n)$ nodes are needed if the network has at least $d$ layers. It was referring to a paper by Håstad, which I couldn't find. Could someone tell me the title of the paper? I think this is a really fascinating theoretical result.

by jakab922 at April 17, 2014 10:21 AM

Fixed parameter tractable algorithms for graph isomorphism

What are the future directions in fixed-parameter tractability of graph isomorphism after these two recent papers:

by Kumar at April 17, 2014 10:21 AM


How to downgrade zeromq from version 4.0.4 to 3.2.4

I have installed zeromq 4.0.4 on my Ubuntu machine and need to downgrade to 3.2.4. I have tried sudo make uninstall and sudo make clean, but neither has worked so far, and I also installed 3.2.4 from source, but my system still reports the zmq version as 4.0.4. How can I get rid of the old zmq files (a clean uninstall of 4.0.4)?

by Naveen Subramani at April 17, 2014 10:11 AM


Sampling problem in portfolio optimization

In a summary I am trying to do the following

  1. Bond Subset 1: get the list of USD bonds, then filter for bonds with YTM > y%, DUR > 10Y, etc. This gives the bonds we are interested in; a subset of these will end up in the final portfolio.
  2. Bond Subset 2: the list of bonds which we surely need to include. Unlike the previous subset, these bonds must definitely appear in the final portfolio.
  3. Constraints :
    a. Match Total Duration and Key Rate Duration: given the DV01 profile of the client’s liability, the resulting portfolio should match this profile within +- X% deviation
    b. Apply Sector Constraints: e.g. total in Financial Sector <= 0.25 of total
    c. Lower and upper limits on investment in a single security
    d. Maximum number of securities in the resulting portfolio = N
  4. Objective : Achieve Yield = Y % or Maximize Yield

So I created a set of inequalities to satisfy these constraints, and I have my objective function defined as well. I am using MATLAB's fmincon to achieve this.

Problem: fmincon tries to include all bonds in the optimization, and the result does not satisfy the constraints. I need to be able to selectively pick or remove bonds from Subset 1, which means I need the solver to handle a variable number of variables. To solve this I am looking for the best way to sample subsets of Bond Subset 1 and run the solver on each, so that I am left with a portfolio that satisfies the constraints. Does anyone have ideas on such a sampling problem in portfolio optimization? (Please don't suggest enumerating all possible combinations of bonds to find subsets for which the constraints are satisfied and yield is maximized; performance of the code is very important here, and there are about 1000 securities.)

by ash at April 17, 2014 10:08 AM


Algorithm for queue with multiple repeated different interval events

I have N events; each repeats after some interval t_i. I want a queue from which I can pop the next event.

For example, I have events A, B, C with intervals 2, 3, 5. At the beginning they are all in the queue with values:

A-2, B-3, C-5

When I take out an event, it should be A; after that I put it back, but add +2.

B-3, A-4, C-5

Next is B. I add +3

A-4, C-5, B-6

Next is A, I add +2

C-5, B-6, A-6

and so on.

What would be the best algorithm for this?
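A standard structure for this is a min-oriented priority queue (binary heap) keyed on each event's next firing time: pop the minimum, then reinsert it with its interval added, giving O(log N) per operation. A sketch reproducing the walkthrough above (names are illustrative):

```scala
import scala.collection.mutable

// Each event repeats every `interval` ticks.
case class Event(name: String, interval: Int)

object EventQueue {
  // Pop the soonest event `steps` times, rescheduling it after each pop.
  def simulate(events: Seq[Event], steps: Int): Seq[String] = {
    // Scala's PriorityQueue is a max-heap, so order by negated firing time.
    val pq = mutable.PriorityQueue.empty[(Int, Event)](Ordering.by[(Int, Event), Int](p => -p._1))
    events.foreach(e => pq.enqueue((e.interval, e)))
    (1 to steps).map { _ =>
      val (t, e) = pq.dequeue()        // next event to fire
      pq.enqueue((t + e.interval, e))  // put it back, firing time += interval                           // record which event fired
    }
  }

  def main(args: Array[String]): Unit = {
    val order = simulate(Seq(Event("A", 2), Event("B", 3), Event("C", 5)), 4)
    println(order.mkString(", ")) // A, B, A, C  (as in the walkthrough)
  }
}
```

If the intervals are small bounded integers, a timer wheel (an array of buckets indexed by time modulo the wheel size) is a common alternative with O(1) amortized cost per event.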

by codez at April 17, 2014 10:00 AM



Fetching the current receive function of an Akka actor from akka-testkit

I have an Akka actor that uses logic like this in my receive method:

become(nextState("some data"))

How do I verify from akka-testkit that this worked (i.e. that the new receive function is nextState and that "some data" was passed to it)?


by Oskar at April 17, 2014 09:40 AM


Is the language $L = \{a^nb^m : n = 2^m\}$ context-free?

Is the language $L = \{a^nb^m : n = 2^m\}$ context-free?

Assume L is a context-free language. Then there exists $p\in \mathbb{Z}^{+}$ such that every $s\in L$ with $\left | s \right |\geq p$ can be written as $s = uvxyz$ with $\left | vy \right |\geq 1$ and $\left | vxy \right |\leq p$, where $s_i = uv^{i}xy^{i}z\in L$ for all $i\geq 0$.

Let s = $\ a^{2^p}b^{p}\ $

Pumping $i$ times will give a string with $2^{p} + (i - 1)j$ a's and $p + (i - 1)k$ b's, where $1 \leq j + k \leq p$.

Case 1: $\ j \neq 0\ $ $\ k \neq 0\ $


Case 2: $\ j = 0\ $ $\ k \neq 0\ $


Case 3: $\ j \neq 0\ $ $\ k = 0\ $


It can be concluded from this that L is not a context-free language.

by user1136671 at April 17, 2014 09:32 AM


How to receive emails using imap in scala play using play libraries?

Is there any way to receive/sync emails using IMAP in Scala Play?

I don't want to use the javax.mail library.

Instead I am looking for Play libraries.

Please suggest any samples or ideas.

by James at April 17, 2014 09:31 AM



Simple Iteration over case class fields

I'm trying to write a generic method to iterate over a case class's fields:

case class PriceMove(price: Double, delta: Double)

def log(pm: PriceMove) { info("price -> " + pm.price + " delta -> " + }

I need to make log able to handle any case class. What should the argument type of log be so that it handles case classes only, and what is the actual generic field-iteration code?
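One common approach is to accept any Product (which every case class extends) and walk its fields positionally; note that productElementName requires Scala 2.13 or later, so this is a sketch under that assumption:

```scala
// Any case class extends Product, which exposes its fields positionally.
case class PriceMove(price: Double, delta: Double)

object CaseClassLog {
  // productElementName requires Scala 2.13+; on older versions, only the
  // values (productElement) are available, not the field names.
  def log(p: Product): String =
    (0 until p.productArity)
      .map(i => s"${p.productElementName(i)} -> ${p.productElement(i)}")
      .mkString(", ")

  def main(args: Array[String]): Unit =
    println(log(PriceMove(101.5, -0.25))) // price -> 101.5, delta -> -0.25
}
```

A drawback of typing the parameter as Product is that tuples and other Product subtypes are accepted too; restricting to case classes only needs a type class or reflection.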

by Vaibhav at April 17, 2014 09:20 AM




Bond Spread Drivers

I have some work to do on the drivers of government bond spreads - i.e. across terms (not across governments) of the yield curve, say 5yr and 20yr bond spreads from the same government issuer - and am having a bit of conceptual difficulty with this.

I have done some reading and get a sense that potential drivers could be: credit risk (measured by credit default swap (CDS) rates), liquidity risk (measured by bid-ask spreads), risk aversion, and a crisis-period variable (an indicator variable assigned the value 1 during the latest financial crisis).

Should I just take the difference of these variables (and assume the differencing renders the variables stationary) and then run OLS regression of the bond spreads on the drivers?

Are there better ways to go about this? Would PCA have any application here?

Thanks in advance for any advice

by Nick at April 17, 2014 09:00 AM



How to find out what are accept, loop and reject in this Turing Machine?

I am trying to find out the accept, loop and reject states in this Turing Machine, because it doesn't have any... I am not sure if I completely understand it, but this is the Turing machine I am talking about:

[image: transition diagram of the Turing machine]

so the accept state would be --> Accept(T2) - all words with an a
the loop state would be --> Loop(T2) - /\

the reject state would be --> Reject(T2) - strings with b

Am I on the right track? How can I find accept, loop and reject on this TM? Thanks!

by Dana at April 17, 2014 08:21 AM

Radix Tries, Tries and Ternary Search Tries

I originally posted this over on Stack Overflow but realised that it may be better suited to the Computer Science site. I'm currently trying to get my head around the variations of the Trie and was wondering if anyone would be able to clarify a couple of points. I got quite confused by the answer to this question, especially within the first paragraph.

From what I have read, is the following correct? Supposing we have stored n elements in the data structures, and L is the length of the string we are searching for:

  • A Trie stores its keys at the leaf nodes, and if we have a positive hit for the search then this means that it will perform $O(L)$ comparisons. For a miss however, the average performance is $O(\log_2(n))$.

  • Similarly, a Radix tree (with $R = 2^r$) stores the keys at the leaf nodes and will perform $O(L)$ comparisons for a positive hit. However misses will be quicker, and occur on average in $O(\log_R(n))$.

  • A Ternary Search Trie is essentially a BST with operations <,>,= and with a character stored in every node. Instead of comparing the whole key at a node (as with BST), we only compare a character of the key at that node. Overall, supposing our alphabet size is $A$, then if there is a hit we must perform (at most) $O(L \cdot A) = O(L)$ comparisons. If there is not a hit, on average we have $O(\log_3(n))$.

With regards to the Radix tree, if for example our alphabet is $\{0,1\}$ and we set $R = 4$, for a binary string $0101$ we would only need two comparisons, right? Hence if the size of our alphabet is $A$, we would actually only perform $L \cdot (A / R)$ comparisons? If so then I guess this just becomes $O(L)$, but I was curious whether this reasoning is correct.

by user3023621 at April 17, 2014 08:18 AM


Relative paths in SBT builds with multiple nested projects

I have an SBT build that looks something like this:

name := "foo"

version := "1.0"

lazy val schemaFile = settingKey[File]("File containing FIX schema for generator")

schemaFile := (resourceDirectory in Compile).value / "input.xml"

sourceGenerators in Compile <+= Def.task {
  import my.project.SourceGenerator
  lazy val filename : String = schemaFile.value.getName.toLowerCase.stripSuffix(".xml") + ".scala"
  lazy val outFile : File = (sourceManaged in Compile).value / filename

SourceGenerator.generate() converts the input *.xml into a *.scala File.

This seems to work fine when run by itself. Now, this is actually a sub-directory of a larger set of projects, which have dependencies between them. So, I've tried to follow the SBT guide to create a parent build.sbt file one level up, and included this project like so:

// build.sbt in the directory directly above foo's build.sbt
lazy val foo = project

lazy val bar = project dependsOn foo

Now when I try to load this configuration I get an error like this:

Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? r
C:\**\rootproj\foo\build.sbt:7: error: type mismatch;
 found   :
 required: T
schemaFile := (resourceDirectory in Compile).value / "input.xml"

What's going on here? Am I going about this the wrong way - i.e. isn't it safe to nest sbt builds inside one another, with a parent 'root' build?
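For comparison, one commonly suggested arrangement is to hoist the custom key and the subproject settings into the single root build.sbt. This is a sketch of that layout only (names and paths taken from the question); whether it resolves this particular type mismatch is not verified:

```scala
// Sketch of a single root build.sbt describing both subprojects.
lazy val schemaFile = settingKey[File]("File containing FIX schema for generator")

lazy val foo = project.settings(
  name := "foo",
  version := "1.0",
  schemaFile := (resourceDirectory in Compile).value / "input.xml"
)

lazy val bar = project.dependsOn(foo)
```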

P.S. On a side note, can somebody point me to the API docs for the / operator and <+=?

by Luciano at April 17, 2014 08:09 AM


Markov switching model estimation

We are testing Markov switching models to forecast risk regimes, similar to the paper by Kritzman, Page and Turkington. We find that in some cases the Baum-Welch algorithm converges very slowly or not at all. Apparently this issue is known for hidden Markov models in speech recognition and other fields, where several alternatives to Baum-Welch have been proposed: for example, Bayesian estimation by M. Johnson or estimators based on an entropic prior by M. Brand.

Have these alternatives been applied to financial time series? Is there a reason to favor one particular approach, apart from slow convergence?

by Felix at April 17, 2014 08:06 AM


OpenVPN FreeNAS BSD installation issues

I'm following this guide to install OpenVPN on my FreeNAS system.

I have run into the issues detailed below when trying to create the CA cert.

[root@freenas] /mnt/NAS/openvpn# chmod -R 755 easy-rsa/2.0/*
[root@freenas] /mnt/NAS/openvpn# cd easy-rsa/2.0
[root@freenas] /mnt/NAS/openvpn/easy-rsa/2.0# sh
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
# . ./vars
NOTE: If you run ./clean-all, I will be doing a rm -rf on /mnt/NAS/openvpn/easyrsa/2.0/keys
# ./build-ca
Please edit the vars script to reflect your configuration,
then source it with "source ./vars".
Next, to start with a fresh PKI configuration and to delete any
previous certificates and keys, run "./clean-all".
Finally, you can run this tool (pkitool) to build certificates/keys.

I have tried creating the keys directory manually, as I have read this has worked for others, but still no luck. Being new to BSD I've hit a road block and am looking for some advice.

Any ideas?

cheers guys


When trying to source ./vars I get the following output:

[root@freenas] /mnt/NAS/openvpn/easy-rsa/2.0# source ./vars
export: Command not found.
export: Command not found.
export: Command not found.
export: Command not found.
EASY_RSA: Undefined variable.
export: Command not found.
EASY_RSA: Undefined variable.
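For what it's worth, "export: Command not found" is characteristically what csh prints when fed a Bourne-shell script, and the FreeNAS root shell is csh by default, while the easy-rsa vars script targets sh. A sketch of the usual workaround (paths as in the transcript; outcome not verified on FreeNAS):

```shell
# "export: Command not found" is csh complaining about Bourne-shell syntax.
# Sketch: drop into a Bourne shell first, then source vars and build the CA.
sh
cd /mnt/NAS/openvpn/easy-rsa/2.0
. ./vars        # sh understands the export lines
./clean-all     # creates/empties the keys directory
./build-ca
```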

by demonLaMagra at April 17, 2014 08:04 AM

Planet FreeBSD

January-March, 2014 Status Report

The January-March, 2014 Status Report is now available with 41 entries.

by FreeBSD News Flash at April 17, 2014 08:00 AM



How to define a custom partitioner for Spark RDDs so that each partition has an equal number of elements?

I am a newbie to Spark. I have a large dataset of elements [RDD] and I want to divide it into two exactly equal-sized partitions, maintaining the order of the elements. I tried using RangePartitioner like

var data = partitionedFile.partitionBy(new RangePartitioner(2, partitionedFile))

This doesn't give satisfactory results, because it divides roughly but not into exactly equal sizes. For example, if there are 64 elements and we use RangePartitioner, it divides into 31 elements and 33 elements.

I need a partitioner such that I get exactly the first 32 elements in one half and the second set of 32 elements in the other half. Could you please help me by suggesting how to use a custom partitioner such that I get two equally sized halves, maintaining the order of the elements?
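A sketch of one way to do this (assumes the RDD is first keyed by a 0-based index via zipWithIndex; class and variable names are illustrative, not from the post):

```scala
import org.apache.spark.Partitioner

// Sketch: send indices in the first half to partition 0, the rest to
// partition 1. Assumes keys are the 0-based Long indices from zipWithIndex.
class ExactHalfPartitioner(totalCount: Long) extends Partitioner {
  override def numPartitions: Int = 2
  override def getPartition(key: Any): Int =
    if (key.asInstanceOf[Long] < (totalCount + 1) / 2) 0 else 1
}

// Usage sketch:
// val indexed = data.zipWithIndex.map(_.swap)          // RDD[(Long, T)]
// val halves  = indexed.partitionBy(new ExactHalfPartitioner(data.count))
```

With 64 elements, indices 0..31 land in partition 0 and 32..63 in partition 1, preserving order within each half.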

by yh18190 at April 17, 2014 07:41 AM


Independent set size of a large girth graphs

For a triangle-free (girth $\geq 4$) graph $G$, the following theorem holds:

Theorem (Ajtai et al.): For a triangle-free graph $G$ with average degree $d$,

$$\alpha(G) \geq \frac{n(G)}{8d}\log_2d.$$

where $n(G)$ is the number of vertices of the graph, $d$ is the average degree and $\alpha(G)$ is the size of a maximum independent set.

My question: Are there extensions of the above result to graphs with girth $\geq l$?

by Bagaria at April 17, 2014 07:36 AM

What are some examples where the Catalan numbers show up in algorithms/data structures?

For some variants of RMQ data structures, the number of Cartesian trees (i.e. the Catalan numbers) is a part of the running-time analysis. What are some other examples where the Catalan numbers show up (either in a data structure itself or in a run-time analysis)?
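For concreteness, the numbers in question are $C_n = \binom{2n}{n}/(n+1)$; a small sketch computing them (the helper name is mine, not from the post):

```scala
// Sketch: the n-th Catalan number C_n = binom(2n, n) / (n + 1), built up
// incrementally so every intermediate value stays an exact integer.
def catalan(n: Int): BigInt =
  (1 to n).foldLeft(BigInt(1))((c, k) => c * (n + k) / k) / (n + 1)

// catalan(3) == 5: the five shapes of a 3-node binary (or Cartesian) tree.
```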

by user830841 at April 17, 2014 07:34 AM

Whether no local degeneracy in PLC implies edge-protection?

In a paper on Constrained Delaunay tetrahedralization (Meshing Piecewise Linear Complexes with Constrained Delaunay Tetrahedralizations), in section 3, in the proof of theorem 2, the author claims that if a piecewise linear complex (PLC) X satisfies theorem 2, then the segments of X will also be edge-protected.

As far as I understand from the paper, if X contains no local degeneracy, then it is guaranteed that there will be no pair of adjacent tetrahedra with vertices sharing a common sphere, but it does not imply that all segments of X will have a unique circumsphere with no other vertex on or inside it (i.e., the edge-protection property).

I would like to know whether I have misunderstood the concept of edge-protection, or whether there is some issue with the claimed relation between theorem 1 and theorem 2.

by Pranav at April 17, 2014 07:32 AM

What is the intuition behind Steiner point insertion rules?

I am reading a paper on Constrained Delaunay tetrahedralization (Meshing Piecewise Linear Complexes with Constrained Delaunay Tetrahedralizations). It mentions rules for inserting Steiner points but says very little about the intuition behind those rules. I would like to know if anyone has an explanation for those rules.

Specifically, I would like to know the justification for why those rules result in a Delaunay tetrahedralization with all constraint segments recovered.

P.S.: I have asked the same question in the ResearchGate Q&A section as well, but I have not received any response. I have also contacted the authors regarding this, but still got no response.

by Pranav at April 17, 2014 07:29 AM

Multidimensional Knapsack W[1]-hard when parameterized by dimension

Under Multidimensional knapsack STRONGLY NP-complete it was discussed that the Multidimensional Knapsack problem is strongly NP-hard.

Within this discussion, the question whether the problem is W[1]-hard when parameterized by the dimension d was mentioned, but not finally answered. Has this question been settled?

by Thomas at April 17, 2014 07:20 AM


How to read a string into an Enumeration without throwing an exception?

I have some data as .csv. It gets input by hand into Excel, and I have to load it into a Scala program.

Most of the fields of a single record are a freetext string or a number, but a few ones come from a prespecified short list of possible strings. I created enumerations for this type of information, like

object ScaleType extends Enumeration {
    val unknown = Value("unknown")
    val nominal = Value("nominal")
    val ordinal = Value("ordinal")
    val interval = Value("interval")
    val rational = Value("rational")
}

I read the csv into an Iterator[Array[String]], and for each line, I create new instance of a scala class, setting its properties from the information in the line. Assume that in the csv, it is line(8) which tells us the scale type we used. So, line is an Array[String], line(8) is a string, and its content should be one of the values listed in the enumeration.

As the data is input by hand, it contains errors. I used an if statement to find out whether line(8) is completely empty, but I can't figure out how to check whether the string I am getting is a scale type at all.

val scale = if(line(8).length > 0) ScaleType.withName(line(8))
        else ScaleType.unknown

What I'd like to happen: if somebody has entered the scale type "rtnl", I want the above to set the val scale to ScaleType.unknown and log the problem (something like print("scale reading error on line " + lineNumber) will be enough). Instead, an exception gets thrown, and I don't know how to check for the problem before the exception happens.
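One non-throwing sketch (ScaleType as above; parseScale is an illustrative name, not from the post): search Enumeration.values for the name instead of calling withName, which throws NoSuchElementException on unknown names.

```scala
// Sketch: look the raw string up in ScaleType.values, so an unrecognized
// name falls back to `unknown` and logs the line instead of throwing.
object ScaleType extends Enumeration {
  val unknown = Value("unknown")
  val nominal = Value("nominal")
  val ordinal = Value("ordinal")
  val interval = Value("interval")
  val rational = Value("rational")
}

def parseScale(raw: String, lineNumber: Int): ScaleType.Value =
  ScaleType.values.find(_.toString == raw).getOrElse {
    print("scale reading error on line " + lineNumber)
    ScaleType.unknown
  }
```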

by rumtscho at April 17, 2014 07:05 AM


Reversible Turing tarpits?

This question is about whether there are any known reversible Turing tarpits, where "reversible" means in the sense of Axelsen and Glück, and "tarpit" is a much more informal concept (and might not be a very good choice of word), but I'll do my best to explain what I mean by it.

What I mean by "tarpit"

Some models of computation are designed to be useful in some way. Others just happen to be Turing complete and don't really have any particularly useful properties; these are known as "Turing tarpits". Examples include the language Brainfuck, the Rule 110 cellular automaton, and the language Bitwise Cyclic Tag (which I like because it's very easy to implement and any binary string is a valid program).

There is no formal definition of "Turing tarpit", but for this question I'm using it to mean a fairly simple system (in terms of having a small number of "rules") that "just happens" to be Turing complete, without its internal state having any obvious semantic meaning. The most important aspect for my purposes is the simplicity of the rules, rather than the lack of obvious semantics. Basically we're talking about the sort of things that Stephen Wolfram once wrote a very large book about, although he didn't use the word "tarpit".

What I mean by "reversible"

I'm interested in reversible computation. In particular, I'm interested in languages that are r-Turing complete, in the sense of Axelsen and Glück, which means that they can calculate every computable injective function, and can only calculate injective functions. Now, there are many models of computation that are reversible in this sense, such as Axelsen's reversible universal Turing machine, or the high-level reversible language Janus. (There are many other examples in the literature; it's an active area of research.)

It should be noted that Axelsen and Glück's definition of r-Turing completeness is a different approach to reversible computing than the usual approach due to Bennett. In Bennett's approach a system is allowed to produce "garbage data" that is thrown away at the end of the computation; under such conditions a reversible system can be Turing complete. However, in Axelsen and Glück's approach, the system is not allowed to produce such "junk data", which restricts the class of problems it can compute. (Hence, "r-Turing complete" rather than "Turing complete".)

Note: the Axelsen and Glück paper is behind a paywall. This is unfortunate - to my knowledge there is not currently any non-paywalled resource on the subject of r-Turing completeness. I'll try to start a Wikipedia page if I have time, but no promises.

What I'm looking for

The examples of reversible computing mentioned above are all rather "semantically laden". This is a good thing in most contexts, but it means that the rules required to update their state at each time step are fairly complex. I'm looking for the "tarpits" of reversible computing. That is, more-or-less arbitrary systems with quite simple rules that "just happen" to be r-Turing complete languages. I reiterate that there is no formal definition of what I'm looking for, but I'll know it when I see it, and I think it's a reasonable thing to ask about.

There are a number of things I know of that almost fit the bill, but not quite. There are several reversible cellular automata that have been shown to be Turing complete. Langton's ant (a kind of two-dimensional Turing machine with a fairly arbitrary and quite simple reversible state transition function) is also Turing complete, as long as its initial conditions are allowed to contain infinite repeating patterns. However, with these systems it's not trivial to define a mapping from their state to an "output" in such a way that no junk data gets thrown away. I'm interested specifically in systems that can be thought of as taking an input, performing some sequence of (reversible) transformations on it, and then (if they terminate) returning some output.

(I'm hoping this question will be easier to answer than my previous related one about a reversible equivalent to the lambda calculus.)

by Nathaniel at April 17, 2014 06:51 AM


How does logging affect QuickFIX performance?

I am using the .NET/C++ version of QuickFIX. How does logging affect QuickFIX performance? If I disable logging to file, can it help to increase the performance of QuickFIX?


by seckin at April 17, 2014 06:47 AM


Spring data neo4j GraphProperty annotation - missing index

When reading the documentation for GraphProperty, one can find that adding this annotation to a field automatically indexes the property:

But it looks like it is not true (at least for 3.0.1). If I understand this right, SDN 3.0.1 uses label-based indexes by default. Here is my class:

object Neo4jAnnotations {
    type GraphId = @field
    type GraphProperty = @field
    type Fetch = @field
    type RelatedTo = @field
    type Id = @field
    type Indexed = @field
    type NodeEntity =

import Neo4jAnnotations._
case class FlightDesignator(@Indexed @GraphProperty(propertyType = classOf[String]) carrier: Carrier,
                            @GraphProperty(propertyType = classOf[java.lang.Integer]) flightNumber: FlightNumber,
                            @GraphProperty(propertyType = classOf[String]) suffix: Option[Suffix] = None,
                            @GraphId id: java.lang.Long = null) {

  private def this() = this(null, null, null)

and my configuration:

@EnableNeo4jRepositories(Array("persistence.common.repository", "persistence.set.repository", "persistence.list.repository"))
class Neo4jConfiguration extends {

  private final val Path = "/db/graph.db"

  // Packages that specify where to look for entity classes; if missing, SDN does not create label-based indexes.
  setBasePackage("domain.common", "domain.set", "domain.list")

  def graphDatabaseService(): GraphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase(Path);


Now, when I load my context:

val a = new AnnotationConfigApplicationContext(classOf[Neo4jConfiguration])

in my logs I can see:

16:18:26.233 [main] DEBUG o.s.d.n.s.schema.SchemaIndexProvider - CREATE INDEX ON :`FlightDesignator`(`carrier`)
16:18:26.233 [main] DEBUG o.s.d.n.s.query.CypherQueryEngine - Executing cypher query: CREATE INDEX ON :`FlightDesignator`(`carrier`) params {}
16:18:27.005 [main] DEBUG o.s.d.n.s.schema.SchemaIndexProvider - CREATE INDEX ON :`FlightDesignator`(`carrier`)
16:18:27.005 [main] DEBUG o.s.d.n.s.query.CypherQueryEngine - Executing cypher query: CREATE INDEX ON :`FlightDesignator`(`carrier`) params {}
16:18:27.315 [main] DEBUG o.s.d.n.s.schema.SchemaIndexProvider - CREATE INDEX ON :`FlightDesignator`(`carrier`)
16:18:27.316 [main] DEBUG o.s.d.n.s.query.CypherQueryEngine - Executing cypher query: CREATE INDEX ON :`FlightDesignator`(`carrier`) params {}
16:18:27.339 [main] DEBUG o.s.d.n.s.schema.SchemaIndexProvider - CREATE INDEX ON :`FlightDesignator`(`carrier`)
16:18:27.339 [main] DEBUG o.s.d.n.s.query.CypherQueryEngine - Executing cypher query: CREATE INDEX ON :`FlightDesignator`(`carrier`) params {}

and when I execute schema command using neo4j-shell I can see:

  ON :FlightDesignator(carrier) ONLINE  

but there are no indexes for the other properties that are annotated with GraphProperty and without Indexed. Is it a bug, or did someone just forget to update the javadoc?

by Andna at April 17, 2014 06:31 AM

Behavior based on a condition specified by the user in Scala?

I have an object Foo

object Foo extends RegexParsers{
  def apply(s:String): Array[String] = parseAll(record, line) match {
       // some logic 
   def record = repsep(mainToken, ",")
   // some more code
   def unquotes = "[^,]+".r 

Now this is pretty hardcoded for comma-separated strings.

I want to modify this function to account for another case (tab-separated).

The following code works for that:

object Foo extends RegexParsers{
      def apply(s:String): Array[String] = parseAll(record, line) match {
           // some logic 
       def record = repsep(mainToken, "\t") // change here
       // some more code
       def unquotes = "[^\t]+".r  // change here

Just two changes...

How do I merge these two changes, so that I can take the delimiter as an argument (with comma as the default) and execute the required code based on that? Thanks
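A sketch of one way to merge them (class and method names are illustrative, and the elided parts of Foo are not reproduced): make the delimiter a constructor parameter with a comma default, and derive both the separator and the token regex from it.

```scala
import scala.util.parsing.combinator.RegexParsers

// Sketch: the delimiter becomes a constructor parameter. Note skipWhitespace
// must be disabled, otherwise RegexParsers would swallow tab delimiters
// before they can be matched.
class DelimitedParser(delimiter: String = ",") extends RegexParsers {
  override val skipWhitespace = false
  def unquoted: Parser[String] = s"[^$delimiter]+".r
  def record: Parser[List[String]] = repsep(unquoted, delimiter)
  def apply(line: String): List[String] = parseAll(record, line) match {
    case Success(fields, _) => fields
    case noSuccess          => sys.error(noSuccess.toString)
  }
}

// val csv = new DelimitedParser()      // comma-separated (default)
// val tsv = new DelimitedParser("\t")  // tab-separated
```

This assumes the delimiter is not a regex metacharacter; for arbitrary delimiters the regex would need quoting.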

by Fraz at April 17, 2014 06:26 AM


Homotopy type theory and Gödel's incompleteness theorems

Kurt Gödel's incompleteness theorems establish the "inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic".

Homotopy Type Theory provides an alternative foundation for mathematics, a univalent foundation based on higher inductive types and the univalence axiom. The HoTT book explains that types are higher groupoids, functions are functors, type families are fibrations, etc.

The recent article "Formally Verified Mathematics" in CACM by Jeremy Avigad and John Harrison discusses HoTT with respect to formally verified mathematics and automatic theorem proving.

Do Gödel's incompleteness theorems apply to HoTT?

And if they do,

is homotopy type theory impaired by Gödel's incompleteness theorem (within the context of formally verified mathematics)?

by hawkeye at April 17, 2014 06:23 AM


Emacs lisp syntax highlighting bug on Github

Apparently you can't reopen closed issues on Bitbucket, but I diagnosed a problem with Pygments and parsing Emacs Lisp literal characters (i.e. ?\"). I'm hoping that with a few votes on the issue the maintainer will reopen it. That, or someone with more Python background than I have could jump in and fix the bug as a separate pull request.

submitted by dgtized

April 17, 2014 06:18 AM


Chain multiple method calls using a map in Scala

I have the following code using Dispatch 0.11:

def myHttpPut(urlToPut: String, params: Map[String, String]): Future[Response] = {
  val req = url(urlToPut).PUT
  params.foreach { case (k, v) => req.addParameter(k, v) }

This does not work because addParameter does not modify req - instead it produces a new req object with the parameter added (which, in this case, is being thrown away). What's the most elegant way to write this so that I essentially loop over params, calling addParameter with each key/value pair of the map, building up req until I pass it into Http(req)?
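The usual pattern here is a foldLeft that threads the immutable request through each call. A self-contained sketch with a stand-in Req (the real Dispatch Req behaves analogously in that addParameter returns a new object):

```scala
// Stand-in for Dispatch's immutable request type, for illustration only.
case class Req(params: Map[String, String] = Map.empty) {
  def addParameter(k: String, v: String): Req = copy(params = params + (k -> v))
}

// foldLeft threads the accumulated Req through every key/value pair,
// so no intermediate result is thrown away.
def withParams(req: Req, params: Map[String, String]): Req =
  params.foldLeft(req) { case (acc, (k, v)) => acc.addParameter(k, v) }

// withParams(Req(), Map("a" -> "1", "b" -> "2")).params
//   == Map("a" -> "1", "b" -> "2")
```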

by ghodss at April 17, 2014 06:07 AM


W-types vs Inductive types

Martin-Löf type theory uses W-types to define inductive structures like integers, lists, etc. However, calculus of inductive constructions doesn't use them in the same way, inductive types there seems to be more like axiom schemas.

Are these two approaches equivalent (they seem to be)? Are there any philosophical reasons why one is better than the other (to me, W-types feel more intuitive, because they are just trees of a special structure)? Which is easier from an implementation point of view? (Inductive types seem better to me, since for W-types to be useful we need at least finite types and products to be available in the system's core.)

by Konstantin Solomatov at April 17, 2014 06:07 AM

Language with extensible type system?

Is there a practical programming language that has an extensible type system? Or alternatively, an add-on type system that can be used with existing languages?

With extensible I mean that the typing rules are specified externally and can be changed by the user without rewriting the typechecking code.

The use would be to add additional checkable constraints to a program (for example physical dimensions, uniqueness types, reference frames, nulled memory etc.).

In the programming language part, one would annotate the expressions with their types:

    import units # library for physical units
    # x has type float and the type meter
    def f(x): return x*x

(I am using Python as an example language as the semantics are independent of the optional type annotations, but any language would do)

In the above case x has multiple types: float and a physical dimension. Each of those have different typing rules, and it makes sense to treat them differently. In the units type library, we would provide typing rules of the following form (assuming that the rules for float are defined elsewhere): $$\frac{\Gamma \vdash x:Meter, n:Nat, m:Nat}{\Gamma \vdash x^\frac{n}{m}:Meter^\frac{n}{m}}$$

While for example physical dimensions can be dealt with through a runtime library somewhat unsatisfactorily, doing dimensional analysis with the tools of types is much more elegant, doesn't incur any runtime overhead and proves dimensional correctness.

For example, the only mainstream language that I am aware of that provides physical dimensions is F#, but unfortunately it is broken, as it does not support fractional exponents. Providing the physical dimensions as a typing library instead of a language internal would allow the user to fix this problem and add other dimensional systems, like directional units or Siano's orientational dimensions.

The annotated AST and the typing rules would be processed by a constraint solver to issue type errors and warnings for insufficiently typed expressions.

Why am I asking for this?

  1. It would be great for understanding type systems
  2. Existing software could be made more secure by adding extra type constraints.

by user22525 at April 17, 2014 05:58 AM


Handling Faults in Akka actors

I have a very simple example where I have an Actor (SimpleActor) that performs a periodic task by sending a message to itself. The message is scheduled in the constructor of the actor. In the normal case (i.e., without faults) everything works fine.

But what if the Actor has to deal with faults? I have another Actor (SimpleActorWithFault). This actor could have faults; in this case, I'm generating one myself by throwing an exception. When a fault happens (i.e., SimpleActorWithFault throws an exception) it is automatically restarted. However, this restart messes up the scheduler inside the Actor, which no longer functions as expected. And if the faults happen rapidly enough, it generates more unexpected behavior.

My question is: what's the preferred way of dealing with faults in such cases? I know I can use Try blocks to handle exceptions, but what if I'm extending another actor where I cannot put a Try in the superclass, or an unexpected fault happens in the actor?

import{Actor, ActorLogging, ActorSystem, Props}
import scala.concurrent.duration._

case object MessageA

case object MessageToSelf

class SimpleActor extends Actor with ActorLogging {

  import context.dispatcher

  //schedule a message to self every second
  context.system.scheduler.schedule(0 seconds, 1 seconds, self, MessageToSelf)

  //keeps track of some internal state
  var count: Int = 0

  def receive: Receive = {
    case MessageA =>"[SimpleActor] Got MessageA at %d".format(count))
    case MessageToSelf =>
      //update state and tell the world about its current state
      count = count + 1"[SimpleActor] Got scheduled message at %d".format(count))
  }
}

class SimpleActorWithFault extends Actor with ActorLogging {

  import context.dispatcher

  //schedule a message to self every second
  context.system.scheduler.schedule(0 seconds, 1 seconds, self, MessageToSelf)

  var count: Int = 0

  def receive: Receive = {
    case MessageA =>"[SimpleActorWithFault] Got MessageA at %d".format(count))
    case MessageToSelf =>
      count = count + 1"[SimpleActorWithFault] Got scheduled message at %d".format(count))

      //at some point generate a fault
      if (count > 5) {"[SimpleActorWithFault] Going to throw an exception now %d".format(count))
        throw new Exception("Excepttttttiooooooon")
      }
  }
}

object MainApp extends App {
  implicit val akkaSystem = ActorSystem()
  //Run the Actor without any faults or exceptions
  akkaSystem.actorOf(Props[SimpleActor], "SimpleActor")

  //comment the above line and uncomment the following to run the actor with faults
  //akkaSystem.actorOf(Props[SimpleActorWithFault], "SimpleActorWithFault")
}

by Soumya Simanta at April 17, 2014 05:46 AM


Hamiltonian cycles in graphs of order n = 1 mod 4, (n-1)/2-regular? [on hold]

Consider a graph of order $n$ where $n \equiv 1 \mod 4$ (i.e. pentagons, nonagons, etc.), and suppose it is a $\frac{n-1}2$-regular graph. Also (potentially optionally) suppose that both it and its complement are connected.

Can this type of graph be verified to be Hamiltonian, and if so, how would the proof roughly go?

by user3537932 at April 17, 2014 05:39 AM


Common Attacks Against Package Management Systems

This paper reviews common attack vectors against package management systems, analyzes both APT and YUM, and points out a number of flaws in them. Many of the attacks discussed are general in nature and could be applied to any sort of ports, packages, or binary update system.

The source of the paper is an ftp:// link, and this site doesn't allow the submission of ftp:// links, so I had to mirror it.


by metatron at April 17, 2014 05:30 AM


Recursively run through a vector in Clojure

I'm just starting to play with Clojure.

How do I run through a vector of items?

My naive recursive function would have a form like the classic map, e.g.:

(defn my-map [f xs]          ; renamed so it doesn't shadow clojure.core/map
  (if (empty? xs)
      '()
      (cons (f (first xs)) (my-map f (rest xs)))))

The thing is, I can't find any examples of this kind of code on the web. I find a lot of examples using built-in sequence-traversing functions like for, map and loop, but no one doing the raw recursive version.

Is that because you SHOULDN'T do this kind of thing in Clojure (e.g. because it uses lower-level Java primitives that don't have tail-call optimisation or something)?

by interstar at April 17, 2014 05:12 AM

Update record at a per-day interval

I am working on a project using Scala. I want to write a method in Scala which fetches a value from MySQL, performs some operation on it, and inserts the result back into MySQL. I am able to do that, but my problem is that this method must be called at a per-day interval. Is there a way in Scala to solve this problem, or should I make an Event Scheduler in MySQL? Can anyone solve this?
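One JVM-side sketch (so no MySQL event scheduler is needed; dailyTask is an illustrative name and the database code is omitted): java.util.concurrent's ScheduledExecutorService can run a task at a fixed daily rate.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Sketch: run dailyTask once every 24 hours on a single background thread.
val scheduler = Executors.newSingleThreadScheduledExecutor()

def dailyTask(): Unit = {
  // fetch from MySQL, transform, insert back (omitted here)
}

scheduler.scheduleAtFixedRate(
  new Runnable { def run(): Unit = dailyTask() },
  0L,                 // initial delay
  24L, TimeUnit.HOURS // period
)
```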

Thanks in advance

by Rishi Dwivedi at April 17, 2014 05:06 AM



Planet Theory

The More Variables, the Better?

Ideas from algebraic geometry and arithmetic complexity


Hyman Bass is a professor of both mathematics and mathematics education at the University of Michigan, after a long and storied career at Columbia University. He was one of the first generation of mathematicians to investigate K-theory, and gave what is now the recognized definition of the first generation of K-groups, that is {K_1(R)} for a ring {R}. He co-founded with Jean-Pierre Serre a theory of graphs of groups, that is mappings associating a group to each vertex and edge of a graph. He is a past President of the American Mathematical Society—and exemplifies the idea that the AMS promotes mathematics at all levels: since 1996 he has worked on elementary school math education with his Michigan colleague Deborah Ball.

Today Ken and I wish to discuss an idea used in a paper on the Jacobian Conjecture by Bass with Edwin Connell and David Wright.

One of the most powerful methods we use in theory is often the reduction of the dimension of a problem. The famous JL theorem and its relatives show us that often problems in high-dimensional Euclidean spaces can be reduced to much lower dimensional problems, with little error. This method can and has been used in many areas of theory to solve a variety of problems.

The idea used by Bass and many others is the opposite. Now we are interested in lifting a problem from a space to a much higher-dimensional space. The critical intuition is that by moving your problem to a higher space there is more “room” to navigate, and this extra “space” allows operations to be performed that would not have been possible in the original lower-dimensional space. An easily quoted example is that the Poincaré conjecture was proved relatively easily for spaces of dimension {d = 5} and higher, then for {d=4} in the 1980s, and finally for {d=3} by Grigory Perelman drawing on work of Richard Hamilton.

Adding Variables

Let {F(x)} be a polynomial map from {\mathbb{Z}^{n}} to {\mathbb{Z}^{n}} where

\displaystyle  x = x_{1},\dots,x_{n}.

The Jacobian Conjecture (JC), which we have covered several times before, indeed recently, studies when such maps are injective. The trouble with the JC is quite simply that such maps can be very complex in their structure, which explains why the JC remains open after more than 70 years.

A very simple, almost trivial, idea is to replace {F(x)} by {G(x,y)}, which is defined by

\displaystyle  G(x,y) = (F(x),y),

for new variables {y=y_{1},\dots,y_{m}}. For example if {F(x) = (x_{1},x_{1}+x_{2}^{7})} then {G} could be

\displaystyle  (x_{1},x_{1}+x_{2}^{7},y_{1},y_{2},y_{3}).

The new variables {y} are used in a “trivial” way. One reason the method is useful for the JC conjecture is that {F} is injective if and only if {G} is injective. This is trivial: the new variables do not interact in any manner with the original variables, and so the map {G} is injective precisely when {F} is.
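A quick sanity check of this equivalence in Python, using the example map above (checking injectivity only over a small integer grid, which is of course weaker than injectivity over all of {\mathbb{Z}^n}):

```python
from itertools import product

# The example map F(x1, x2) = (x1, x1 + x2^7) from the text.
def F(x):
    x1, x2 = x
    return (x1, x1 + x2 ** 7)

# G "tacks on" identity coordinates y; injectivity is unaffected.
def G(xy):
    return F(xy[:2]) + xy[2:]

def injective_on(f, domain):
    """Check that f takes distinct values on every point of the domain."""
    seen = {}
    for p in domain:
        v = f(p)
        if v in seen and seen[v] != p:
            return False
        seen[v] = p
    return True

xs = list(product(range(-3, 4), repeat=2))    # grid for F
xys = list(product(range(-3, 4), repeat=3))   # grid for G, one extra y
```

Over the integers {x_2 \mapsto x_2^7} is injective (odd power), so both checks pass and would fail together if they failed at all.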

Why is this of any use? Clearly {G} is really just {F} with an identity function on the variables {y} “tacked on” to it. How can this help?

Using The Extra Variables

The answer is that we can use the extra variables to modify the polynomial {G} so that it looks very different from {F}. Suppose that we replace {G} by {H = A \circ G} where {A} is a nice polynomial map. We may be able to vastly change the structure of {G} and get an {H} that is much simpler by some measure. The hope is that this restructuring will allow us to prove something about {H} that then implies something about {G}. This is exactly what happens in the study of the JC.

Here is a famous result from the Bass-Connell-Wright paper:

Theorem 1 Given {F(x)} there is an automorphism {A} so that {H=A \circ G} has degree at most three. Moreover {F} is injective if and only if {H} is.

This allows the JC question to be reduced to the study of cubic maps. Of course such low-degree maps still have complex behavior, but the hope is that, while they are in many more variables, the restriction on their degree may make their structure easier to understand. This has yet to be fruitful: the JC remains open.


General Use In Theory?

One idea is to try to use this stabilization philosophy to attack problems about polynomials that arise in complexity theory. For example, can we use the addition of extra variables to attack the power of polynomials modulo composite numbers? Suppose that {F(x)} is a polynomial in many variables that, modulo {pq}, computes something interesting. What if we add extra variables as above, then rearrange the resulting polynomial to have a “better” structure? We must preserve not its injectivity but, for our purposes, its ability to compute something. If we can do this, then perhaps we can use the extra variables to obtain a lower bound.

Two simple examples from arithmetical complexity are aliasing variables in a formula to make it read-once, and the Derivative Lemma proved by Walter Baur and Volker Strassen. In the latter, the idea is to take a certain gate {g} of a circuit {C} computing a function {f}, and regard it as a new variable {x_{n+1}} in a circuit {C'}. The circuit {C'} computes a function {f'} of {n+1} variables such that

\displaystyle  f(x_1,\dots,x_n) = f'(x_1,\dots,x_n,g(x)).

In the Derivative Lemma the gate {g} is chosen to involve one-or-two input variables {x_i}, but the idea can be extended to other cases. When the construction is applied recursively we generate circuits {C'',C^{(3)},C^{(4)},\dots} in which the number of variables becomes higher and higher, but the circuits themselves become flatter and flatter, until only a “stardust” of many single-variable functions is left.
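A minimal sketch of the gate-as-variable trick, with a made-up two-gate circuit:

```python
# Hypothetical circuit computing f(x1, x2) = x1 * x2 + x1.
# Pick the product gate g as the new variable x3.
def g(x1, x2):
    return x1 * x2

# The "flatter" circuit f' reads the gate's value as an input of its own.
def f_prime(x1, x2, x3):
    return x3 + x1

# Recover f by wiring the gate back in: f(x) = f'(x, g(x)).
def f(x1, x2):
    return f_prime(x1, x2, g(x1, x2))
```

Applying the same move to the gates of f' in turn is what drives the number of variables up while flattening the circuit.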

Open Problems

Are there general rules of thumb on when you should increase versus decrease the dimension? Or when to induct on the output gate(s) of a circuit, on the number of variables going down, or on the number of variables going up?

by Pip at April 17, 2014 04:45 AM


Where is the definition of struct rqhead in FreeBSD?

I am trying to modify the FreeBSD kernel. There is a struct named rqhead, used in the function runq_choose() and in struct runq. I'm looking for the definition of this struct, which is not in runq.h. Where is this struct defined, and what is its full name?

by ehsanik at April 17, 2014 04:41 AM


Is this implementation of a 3-bit least significant encoder correct?

Here's what I did:


submitted by TaysiirG
submitted by TaysiirG

April 17, 2014 04:16 AM


Correct way of creating an ActorSystem inside an Actor

I'm using a third-party library (rediscala) to access a Redis DB inside my own Actor. Below is an example of how I'm currently doing it. Is this correct? Are there any potential problems with this code, given that I'm creating an ActorSystem inside my actor? If SimpleRedisClientActor crashes, it will be restarted and will create another ActorSystem. Should I override preStart and postStop?

import akka.actor.{Actor, ActorLogging, ActorSystem, Props}
import redis.RedisClient

import scala.util.{Failure, Success}

class SimpleRedisClientActor extends Actor with ActorLogging {

  implicit val akkaSystem = ActorSystem()
  val redis = RedisClient()

  import context.dispatcher

  def receive: Receive = {
    case PingRedis =>
      val futurePong = redis.ping()
      futurePong.onComplete {
        case Success(s) => log.info("Redis replied back with a pong")
        case Failure(f) => log.info("Something was wrong ...")
      }
  }
}

object RedisActorDemo extends App {

  implicit val akkaSystem = ActorSystem()
  val simpleActor = akkaSystem.actorOf(Props(classOf[SimpleRedisClientActor]))
  simpleActor ! PingRedis
}

object PingRedis

by Soumya Simanta at April 17, 2014 03:54 AM

pattern match in curry function: which paramter to match against

For a pattern match in a curried function, why does it match against the later parameter instead of the first? E.g., as below, it matches the second parameter, so the result is "a"; why not "b", which would match the first parameter?

   def curr3(n:String):String=>String={
     case "a"=>"a"
     case "b"=>"b"
   }

by Daniel Wu at April 17, 2014 03:41 AM

error: ':' expected but identifier found

Since type is a reserved word, I append an underscore when using it as an identifier. (I can find no style recommendation about this.)

val type_ = "abc"

But then I used it as an argument identifier.

def f(id: String, type_: String, typeName: String) = Map(
    "id" -> id,
    "type" -> type_,
    "typeName" -> typeName
)

println(f("a", "simple", "test"))

But I get an error

error: identifier expected but 'type' found.
def f(type: String) = 1

Putting a space between type_ and : fixes it

def f(id: String, type_ : String, typeName: String)

though this goes against the recommended Scala style.

Is this a bug in the Scala compiler? I haven't been able to find any sort of grammar for Scala syntax, so I can't be sure.

by Paul Draper at April 17, 2014 03:33 AM



Reduction from 3 Dimensional Matching to Magnets

To elaborate a bit on MAGNETS: imagine you have a pile of fridge magnets M and a list W of words that can be spelled with the magnets. Given M and W, can you create a list of words A such that each word in A is in W and A uses every magnet available?

The goal is to prove that MAGNETS is NP-complete. I've got a certificate and a verification algorithm that are correct, and the professor has hinted that we are allowed to use the fact that 3-Dimensional Matching is NP-complete, but I can't think of a transformation from 3D Matching to MAGNETS. Any help pointing me in the right direction would be appreciated.
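As a point of reference, the verification half (NP membership) can be sketched as follows; the function name and the tiny example are hypothetical, and this does not address the reduction being asked for:

```python
from collections import Counter

def verify_magnets(M, W, A):
    """Check a candidate answer A: every word must be in W, and the
    letters used by A must exactly exhaust the multiset of magnets M."""
    if any(word not in W for word in A):
        return False
    used = Counter()
    for word in A:
        used.update(word)
    return used == Counter(M)

M = list("tacocat")                 # seven magnets
W = {"taco", "cat", "act", "tot"}   # allowed words
```

The reduction has to go the other way: encode a 3D Matching instance as magnets and words so that an exact cover of the magnets corresponds to a perfect matching.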

by user1217222 at April 17, 2014 02:59 AM


Factor Clojure code setting many different fields in a Java object using a parameter map bound to a var or local

I would like to set a group of fields in a Java object from Clojure without using reflection at runtime.

This solution (copied from one of the solutions) is close to what I am after:

(defmacro set-all! [obj m]
    `(do ~@(map (fn [e] `(set! (. ~obj ~(key e)) ~(val e))) m) ~obj))

(def a (java.awt.Point.))
(set-all! a {x 300 y 100})

This works fine but I want the macro to be able to process a map of fields and values passed in as a var or as a local binding (i.e. not passed directly to the macro as above). The fields should be represented as keywords so the following should work:

(def a (java.awt.Point.))
(def m {:x 300 :y 100})
(set-all! a m)

I can't figure out how to do this using set! and the special dot form within a macro (or any solution that works as above without using reflection at runtime).

by optevo at April 17, 2014 02:50 AM


American Swaption Pricing with Monte-Carlo method

I want to price an American swaption but I am not sure about what I am doing.

Tree methods and PDE discretization seem difficult to adapt to a swaption, so I am trying a Monte-Carlo approach (in another question I am trying a PDE approach).

First I have the American option backward-induction ("retrograde") equations (timestep $\delta t$):

$$ V_t = \max(\phi(S_t), E(e^{-r \delta t} V_{t+\delta t} \mid F_t)) $$ $$ V_T = \phi(S_T) $$

(source: my old courses)

And Black's formula for an European call swaption:

$$ C_t = (\delta \sum_{j=n+1}^{M+1} Z_t^{T_j})[R(t,T_n,T_m) \Phi(d_1) - \hat{R} \Phi(d_2)] $$


Here are my questions:

1) Is it possible to mix the American option backward equations with Black's formula? What do I need to use for the payoff $\phi$? And for the expectation (under which probability measure)?

2) What do I need then? I think the next step is to introduce a model for $r$, $Z$ or $R$, calibrate it, then simulate it and apply the classical Monte-Carlo method for American options. What are my options now?

3) Is there any better MC method (QMC or Longstaff-Schwartz) which would be better adapted?

I have asked another question to the community about PDE Pricing for American swaption: American Swaption Pricing with PDE discretization

by lmorin at April 17, 2014 02:48 AM


make.conf: libjpeg-turbo

Is there any way to make libjpeg-turbo the default in my make.conf, such that port builds stop pulling in regular graphics/jpeg?

submitted by benfranklingates
submitted by benfranklingates

April 17, 2014 02:21 AM


Category theory and graphs

Could most categories, or a finite part of them, be represented on a connected, partly directed subgraph of the complete graph on N vertices (K_n)? Could all the axioms of category theory be written for such graphs?

by user128932 at April 17, 2014 02:12 AM

DragonFly BSD Digest

One weird trick for dports

Remember: If you have a particular port that’s not building in DragonFly, there may be a patch in pkgsrc that could be brought over, as John Marino points out.

by Justin Sherrill at April 17, 2014 02:10 AM

Planet Clojure

Separation of Presentation and Content

Summary: One reason to separate style from content is to reuse HTML or CSS. Ultimately, we would like a solution where we can reuse both.

Reusable Content

There is an economic reason to separate presentation from content. Publishers have thousands of pages of HTML on their site, yet they want to enhance the style of their pages over time. It would cost a lot of money to change every single page to match their new style. So they invest a little more time writing each page so that the HTML markup does not refer to styles but to the semantics of the content (referred to as semantic HTML). Then they hire a designer to write CSS to make their existing content look new. The HTML is permanent and reusable, and the CSS is temporary and not reusable. The separation is only one way: the HTML doesn't know the CSS, but the CSS does know the HTML.

Examples: CSS Zen Garden, newspaper websites, blogs

Characteristics: Semantic markup, CSS tailored to classes/structure of HTML

Reusable Styles

Yet another economic reason reflects a relatively new phenomenon. It has become very easy to create a new web site/application. Writing (or generating) lots of HTML is cheap, and it changes often during iterative development. What is relatively expensive is to design each of those pages each time the pages change. CSS is not good at adapting to page structure changes. So people have built CSS frameworks where the CSS is (relatively) permanent and the HTML is temporary. In these cases, the HTML knows the CSS, but the CSS doesn't know the HTML. The separation is again one way--this time in the other direction.

Examples: Open Source CSS, Bootstrap, Foundation, Pure

Characteristics: HTML tailored to classes/structure of CSS, Reusable CSS

Reusable Content and Styles

What if a newspaper site, with millions of existing HTML pages, could cheaply take advantage of the reusable styles of frameworks like Bootstrap? That is the Holy Grail of separation of concerns. What would be required to do that?

What we really want is a two-way separation. We want HTML written in total isolation and CSS written in total isolation. We want permanent HTML and permanent CSS. How can the style and content, each developed separately, finally be brought together? The answer is simple: a third document to relate the two.

We have already seen that CSS is not good at abstraction. CSS cannot name a style to use it later. However, LESS does have powerful forms of abstraction. LESS has the ability to define reusable styles and apply them to HTML that did not have those styles in mind. If you put the definition of reusable styles in one document and the application of those styles in another document, you achieve true separation. And it is already happening a little bit. You can do it in your own code.

It is a bit like a software library. We put the reusable bits in the library, and their specific use in the app.

Examples: Compass, Semantic Grid System

Characteristics: Semantic markup, Reuseable Styles, Tie-in document to relate Style to Content


CSS preprocessors, which began as convenience tools, are actually powerful enough to solve fundamental problems with HTML and CSS. While it is still early, LESS and other CSS preprocessors, if harnessed correctly, could dramatically transform how we build and design web sites. Typography, grids and layout, and other design concerns could be used as pluggable libraries. And other languages specifically designed for this may emerge. What would a systematic, analytical treatment of such an approach look like?


You may also be interested in the Clojure Gazette, a free, weekly email newsletter to inspire Clojure programmers.

by LispCast at April 17, 2014 01:56 AM



Help with programming problem??

Hello everyone, I am trying to prepare for a programming competition, and I need help on this problem. Any help is appreciated! Thanks!

A positive integer is said to be a palindrome with respect to base b if its representation in base b reads the same from left to right as from right to left. Palindromes are formed as follows: given a number, reverse its digits and add the resulting number to the original number. If the result isn't a palindrome, repeat the process. For example, start with 87 base 10. Applying this process, we obtain:

87 + 78 = 165
165 + 561 = 726
726 + 627 = 1353
1353 + 3531 = 4884, a palindrome

Whether all numbers eventually become palindromes under this process is unproved, but all base 10 numbers less than 10,000 have been tested. Every one becomes a palindrome in a relatively small number of steps (of the 900 3-digit numbers, 90 are palindromes to start with and 735 of the remainder take fewer than 5 reversals and additions to yield a palindrome). Except, that is, for 196. Although no proof exists that it will not produce a palindrome, this number has been carried through to produce a 2 million-digit number without producing a palindrome.

INPUT: five base 10 positive integers

OUTPUT: Print the palindrome produced. If no palindrome is produced after 10 additions, print the word “none” and the last sum.


  1. 87   →  4884
  2. 196  →  NONE, 18211171
  3. 1689 →  56265
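The reverse-and-add loop itself is short. Here is a sketch in Python following the 10-addition cutoff from the output spec (how an input that is already a palindrome should be handled is not exercised by the samples, so this sketch only checks after each addition):

```python
def reverse_and_add(n, max_adds=10):
    """Return the palindrome produced, or ('none', last_sum) if no
    palindrome appears within max_adds additions."""
    for _ in range(max_adds):
        n += int(str(n)[::-1])          # add the digit-reversal of n
        if str(n) == str(n)[::-1]:      # palindrome check
            return n
    return ("none", n)
```

The three sample cases reproduce: 87 reaches 4884, 1689 reaches 56265, and 196 yields no palindrome after 10 additions, with last sum 18211171.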
submitted by stockinvestor
submitted by stockinvestor

April 17, 2014 01:44 AM


Making code more scala idiomatic

I came across the following Java-like code in a Scala project. How can I make it more idiomatic Scala, with no side effects (handling exceptions appropriately)?

I am thinking of using the scalaz disjunction (\/) (I know I can use Scala's Either too, but I guess I like the right-biased version more). In one function there are a few such if-checks (the one above is one example), each of which throws one or another type of exception. How can I make such code more idiomatic Scala?

EDIT: The question is not about how to convert Java null checks into idiomatic Scala, which I am already doing, e.g. the following:

hpi.fold(throw new Exception("Instance not found for id " + processInstanceId)) { h =>
  val pi = new ProcessInstance(taskResponse)

Now the return type of the existing function is some value, say ProcessInstance, but in my opinion that is misleading: the caller would never know that it can throw an exception. So my question is more about returning Either[Error, Value] from such functions. And if I have a few such exceptions captured in a single function, how do I accumulate them all and reflect that in the return type?

by user2066049 at April 17, 2014 01:34 AM


Determining the minimum vertex cover in a bipartite graph from a maximum flow/matching using the residual network rather than alternating paths

Wikipedia shows how one can determine the minimum vertex cover in a bipartite graph ($G(X \cup Y, E)$) in polytime from a maximum flow using alternating paths. However, I read that the (S,T) cut (extracted from the final residual network) can also be used to determine the minimum vertex cover:

$$(X\cap T)\cup(Y\cap S)$$

If this expression is a correct alternative, I don't have an intuition for why it's true. The best intuition I've been able to come up with is: Select each vertex on the left (X) that has a positive flow leading up to it and select each vertex on the right if there is no flow leading up to it. Why is this set equal to the minimum vertex cover?
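One way to build intuition is to run the construction on a small instance. The sketch below (plain Python; the five-vertex bipartite graph is invented) builds the unit-capacity network s → X → Y → t, computes a max flow, takes S to be the set of vertices reachable from s in the final residual network, and checks that (X ∩ T) ∪ (Y ∩ S) is a vertex cover whose size equals the matching size, as König's theorem demands:

```python
from collections import deque

def max_flow_min_cover(X, Y, edges):
    """Unit-capacity network s -> X -> Y -> t.  Returns (matching_size,
    cover) where cover = (X not reachable from s) | (Y reachable from s)
    in the final residual network, i.e. (X ∩ T) ∪ (Y ∩ S)."""
    SRC, SNK = "s", "t"
    cap = {}
    def add(u, v):
        cap.setdefault(u, {})[v] = 1
        cap.setdefault(v, {}).setdefault(u, 0)   # residual back-edge
    for x in X: add(SRC, x)
    for y in Y: add(y, SNK)
    for x, y in edges: add(x, y)

    def bfs_path():  # shortest augmenting path (Edmonds-Karp)
        prev = {SRC: None}
        q = deque([SRC])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    if v == SNK:
                        return prev
                    q.append(v)
        return None

    flow = 0
    while (prev := bfs_path()):
        v = SNK
        while prev[v] is not None:           # push one unit along the path
            u = prev[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

    S = {SRC}                                # residual reachability from s
    q = deque([SRC])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in S:
                S.add(v)
                q.append(v)
    cover = {x for x in X if x not in S} | {y for y in Y if y in S}
    return flow, cover

X, Y = ["x1", "x2", "x3"], ["y1", "y2"]
edges = [("x1", "y1"), ("x2", "y1"), ("x2", "y2"), ("x3", "y2")]
flow, cover = max_flow_min_cover(X, Y, edges)
```

The intuition matches the expression: every edge of the min cut is either s→x with x on the T side, or y→t with y on the S side, and an uncovered edge x→y with x ∈ S, y ∈ T would be an unsaturated forward edge crossing the cut, which cannot exist.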

by Wuschelbeutel Kartoffelhuhn at April 17, 2014 01:33 AM


Testing the validity of a factor model for stock returns

Consider the following system of n regression equations:

$$r^i = X^i \beta^i + \epsilon^i \;\;\; \text{for} \; i=1,2,\dots,n$$

where $r^i$ is a $(T\times 1)$ vector of the T observations of the dependent variable, $X^i$ is a $(T\times k)$ matrix of independent variables, $\beta^i$ is a $(k\times1)$ vector of the regression coefficients and $\epsilon^i$ is the vector of errors for the $T$ observations of the $i^{th}$ regression.

My question is: in order to test the validity of this model for stock returns (i.e. the inclusion of those explanatory variables) using AIC or BIC criterion, should these criterion be computed on a time-series basis (i.e. for each stock), or on a cross-sectional basis (and then averaged over time)?

by Mariam at April 17, 2014 01:33 AM


Play / Anorm. Best perfomance

What are best practices for getting the best performance when executing SQL statements in Anorm?

I want to get maximal performance from my DB calls.

What is the appropriate way to load data from a database and process it (object mapping) as quickly as possible?

Should I still use Anorm, or do I need to use something else, maybe the "native" JDBC driver?

UPDATE: I use PostgreSQL.

My database contains an "offices" table.

This table contains 4 rows.

When I run the query (a simple select with an order by) in PgAdmin, the execution time is 1-30 ms.

So, in Scala code I created a case class Office:

case class Office(

Also, I created OfficeDBO:

case class OfficeDBO(
  id: Long,
  name: String,
  phone: String,
  address: String

object OfficeDBO {
  val Mapping = {
    get[Long]("office__id") ~
    get[String]("office__name") ~
    get[String]("office__phone") ~
    get[String]("office__address") map {
      case officeId ~ officeName ~ officePhone ~ officeAddress => OfficeDBO(

  def toOffice(officeDBO: OfficeDBO): Office = {
    new Office(,,,

And, service method OfficesService.findOffices():

def findOffices(): List[Office] = {
    LOGGER.debug("Find offices")

    DB.withConnection(implicit connection => {
        order by
      .as(OfficeDBO.Mapping *)

The method OfficesService.findOffices() executes in 7 seconds!

Where is the problem? How should I refactor the code to get maximal performance?

by jxcoder at April 17, 2014 01:32 AM

Planet Clojure

Hitchhiker’s Guide to Clojure

Hitchhiker’s Guide to Clojure

From the webpage:

The following is a cautionary example of the unpredictable combination of Clojure, a marathon viewing of the BBC’s series “The Hitchhiker’s Guide to the Galaxy”, and a questionable amount of cheese.

There have been many tourism guides to the Clojure programming language. Some that easily come to mind for their intellectual erudition and prose are “The Joy of Touring Clojure”, “Touring Clojure”, “Clojure Touring”, and the newest edition of “Touring Clojure Touring”. However, none has surpassed the wild popularity of “The Hitchhiker’s Guide to Clojure”. It has sold over 500 million copies and has been on the “BigInt’s Board of Programming Language Tourism” for the past 15 years. While, arguably, it lacked the in-depth coverage of the other guides, it made up for it in useful practical tips, such as what to do if you find a nil in your pistachio. Most of all, the cover had the following words printed in very large letters: Don’t Worry About the Parens.

To tell the story of the book, it is best to tell the story of two people whose lives were affected by it: Amy Denn, one of the last remaining Pascal developers in Cincinnati, and Frank Pecan, a time traveler, guidebook researcher, and friend of Amy.

There isn’t any rule (that I’m aware of) that says computer texts must be written to be unfunny.

I think my only complaint is that the story is too short. ;-)


by Patrick Durusau at April 17, 2014 01:31 AM

arXiv Cryptography and Security

A Bitcoin system with no mining and no history transactions: Build a compact Bitcoin system. (arXiv:1404.4275v1 [cs.CE])

This article changes a lot of the original Bitcoin system: fast currency distribution within 1 year by utilizing buyers' different characters, removal of bloated historical transactions from data synchronization, no mining, no blockchain, and no checkpoints; it is environmentally friendly, purely decentralized, and based purely on proof of stake. The logic is very simple and intuitive: 51% of stakes talk. On the security side, we propose TILP & SSS strategies to secure our system. We utilize high-credit individuals as the initial source of credit, taking Google as an example.

by Qian Xiaochao at April 17, 2014 01:30 AM

Managing Change in Graph-structured Data Using Description Logics. (arXiv:1404.4274v1 [cs.AI])

In this paper, we consider the setting of graph-structured data that evolves as a result of operations carried out by users or applications. We study different reasoning problems, which range from ensuring the satisfaction of a given set of integrity constraints after a given sequence of updates, to deciding the (non-)existence of a sequence of actions that would take the data to an (un)desirable state, starting either from a specific data instance or from an incomplete description of it. We consider an action language in which actions are finite sequences of conditional insertions and deletions of nodes and labels, and use Description Logics for describing integrity constraints and (partial) states of the data. We then formalize the above data management problems as a static verification problem and several planning problems. We provide algorithms and tight complexity bounds for the formalized problems, both for an expressive DL and for a variant of DL-Lite.

by Magdalena Ortiz, Mantas Simkus, Diego Calvanese, Shqiponja Ahmetaj at April 17, 2014 01:30 AM

Witness structures and immediate snapshot complexes. (arXiv:1404.4250v1 [cs.DC])

This paper deals with the mathematics of the immediate snapshot read/write shared memory communication model. Specifically, we introduce and study combinatorial simplicial complexes, called immediate snapshot complexes, which correspond to the standard protocol complexes in that model.

In order to define the immediate snapshot complexes we need a new mathematical object, which we call a witness structure. We develop the rigorous mathematical theory of witness structures and use it to prove several combinatorial as well as topological properties of the immediate snapshot complexes.

by Dmitry N. Kozlov at April 17, 2014 01:30 AM

Broder's Chain Is Not Rapidly Mixing. (arXiv:1404.4249v1 [cs.DM])

We prove that Broder's Markov chain for approximate sampling near-perfect and perfect matchings is not rapidly mixing for Hamiltonian, regular, threshold and planar bipartite graphs, filling a gap in the literature. In the second part we experimentally compare Broder's chain with the Markov chain by Jerrum, Sinclair and Vigoda from 2004. For the first time, we provide a systematic experimental investigation of mixing time bounds for these Markov chains. We observe that the exact total mixing time is in many cases significantly lower than known upper bounds using canonical path or multicommodity flow methods, even if the structure of an underlying state graph is known. In contrast we observe comparatively tighter upper bounds using spectral gaps.

by Annabell Berger, Steffen Rechner at April 17, 2014 01:30 AM

An Approach to Assertion-based Debugging of Higher-Order (C)LP Programs. (arXiv:1404.4246v1 [cs.PL])

Higher-order constructs extend the expressiveness of first-order (Constraint) Logic Programming ((C)LP) both syntactically and semantically. At the same time assertions have been in use for some time in (C)LP systems helping programmers detect errors and validate programs. However, these assertion-based extensions to (C)LP have not been integrated well with higher-order to date. This paper contributes to filling this gap by extending the assertion-based approach to error detection and program validation to the higher-order context within (C)LP. We propose an extension of properties and assertions as used in (C)LP in order to be able to fully describe arguments that are predicates. The extension makes the full power of the assertion language available when describing higher-order arguments. We provide syntax and semantics for (higher-order) properties and assertions, as well as for programs which contain such assertions, including the notions of error and partial correctness and provide some formal results. We also discuss several alternatives for performing run-time checking of such programs.

by Nataliia Stulova, José F. Morales, Manuel V. Hermenegildo at April 17, 2014 01:30 AM

From advertising profits to bandwidth prices - A quantitative methodology for negotiating premium peering. (arXiv:1404.4208v1 [cs.NI])

We have developed a first of its kind methodology for deriving bandwidth prices for premium direct peering between Access ISPs (A-ISPs) and Content and Service Providers (CSPs) that want to deliver content and services in premium quality. Our methodology establishes a direct link between service profitability, e.g., from advertising, user- and subscriber-loyalty, interconnection costs, and finally bandwidth price for peering. Unlike existing work in both the networking and economics literature, our resulting computational model built around Nash bargaining, can be used for deriving quantitative results comparable to actual market prices. We analyze the US market and derive prices for video that compare favorably with existing prices for transit and paid peering. We also observe that the fair prices returned by the model for high-profit/low-volume services such as search, are orders of magnitude higher than current bandwidth prices. This implies that resolving existing (fierce) interconnection tussles may require per service, instead of wholesale, peering between A-ISPs and CSPs. Our model can be used for deriving initial benchmark prices for such negotiations.

by Laszlo Gyarmati, Nikolaos Laoutaris, Kostas Sdrolias, Pablo Rodriguez, Costas Courcoubetis at April 17, 2014 01:30 AM

A qutrit Quantum Key Distribution protocol with better noise resistance. (arXiv:1404.4199v1 [quant-ph])

The Ekert quantum key distribution protocol uses pairs of entangled qubits and performs checks based on a Bell inequality to detect eavesdropping. The 3DEB protocol instead uses pairs of entangled qutrits to achieve better noise resistance than the Ekert protocol; it performs checks based on a Bell inequality for qutrits named CHSH-3. In this paper, we present a new protocol which also uses pairs of entangled qutrits, but achieves even better noise resistance than 3DEB. This performance gain is obtained by using another inequality, called here hCHSH-3. As the hCHSH-3 inequality involves products of observables which become incompatible when using quantum states, we show how the parties running the protocol can measure the violation of hCHSH-3 in the presence of noise, to ensure the secrecy of the key.

by François Arnault, Zoé Amblard at April 17, 2014 01:30 AM

Factor Complexity of S-adic sequences generated by the Arnoux-Rauzy-Poincaré Algorithm. (arXiv:1404.4189v1 [cs.DM])

The Arnoux-Rauzy-Poincaré multidimensional continued fraction algorithm is obtained by combining the Arnoux-Rauzy and Poincaré algorithms. It is a generalized Euclidean algorithm. Its three-dimensional linear version consists in subtracting the sum of the two smallest entries from the largest when possible (the Arnoux-Rauzy step), and otherwise in subtracting the smallest entry from the median and the median from the largest (the Poincaré step), performing Arnoux-Rauzy steps in priority whenever possible. After renormalization it provides a piecewise fractional map of the standard $2$-simplex. We study here the factor complexity of its associated symbolic dynamical system, defined as an $S$-adic system. It is made of infinite words generated by the composition of sequences of finitely many substitutions, together with some restrictions concerning the allowed sequences of substitutions, expressed in terms of a regular language. Here, the substitutions are provided by the matrices of the linear version of the algorithm. We give an upper bound for the linear growth of the factor complexity. We then deduce the convergence of the associated algorithm by unique ergodicity.

by Valérie Berthé, Sébastien Labbé at April 17, 2014 01:30 AM

A Deterministic TCP Bandwidth Sharing Model. (arXiv:1404.4173v1 [cs.NI])

Traditionally TCP bandwidth sharing has been investigated mainly by stochastic approaches, due to its seemingly chaotic nature. Even though of great generality, these theories deal mainly with expectation values, which are prone to misinterpretation with respect to Quality-of-Experience (QoE). We disassemble TCP operating conditions into dominating scenarios and show that bandwidth sharing alone follows mostly deterministic rules. From the analysis we derive significant root causes of well-known TCP aspects like unequal sharing, burstiness of losses, global synchronization, and buffer sizing. We base our model on a detailed analysis of bandwidth sharing experiments with subsequent mathematical reproduction.

by <a href="">Wolfram Lautenschlaeger</a> at April 17, 2014 01:30 AM

Multiplicative weights in monotropic games. (arXiv:1404.4163v1 [cs.GT])

We introduce a new class of population games that we call monotropic; these are games characterized by the presence of a unique globally neutrally stable Nash equilibrium. Monotropic games generalize strictly concave potential games and zero sum games with a unique minimax solution. Within the class of monotropic games, we study a multiplicative weights dynamic. We show that, depending on a parameter called the learning rate, multiplicative weights are interior globally convergent to the unique equilibrium of monotropic games, but may also induce chaotic behavior if the learning rate is not carefully chosen.
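For context, a generic multiplicative-weights update on the probability simplex looks like the following sketch, where eps is the learning rate mentioned in the abstract. This is the textbook form of the dynamic, not necessarily the paper's exact variant; all names are mine:

```java
// Generic multiplicative-weights update: each strategy's weight is scaled
// exponentially by its payoff, then the weights are renormalized to sum to 1.
class MultiplicativeWeights {
    static double[] step(double[] weights, double[] payoffs, double eps) {
        double[] next = new double[weights.length];
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            next[i] = weights[i] * Math.exp(eps * payoffs[i]); // reward better strategies
            sum += next[i];
        }
        for (int i = 0; i < next.length; i++) next[i] /= sum;  // renormalize
        return next;
    }
}
```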

by <a href="">Ioannis Avramopoulos</a> at April 17, 2014 01:30 AM

An Optimized and Scalable Eigensolver for Sequences of Eigenvalue Problems. (arXiv:1404.4161v1 [cs.MS])

In many scientific applications the solutions of non-linear differential equations are obtained through the set-up and solution of a number of successive eigenproblems. These eigenproblems can be regarded as a sequence whenever the solution of one problem fosters the initialization of the next. In addition, some eigenproblem sequences show a connection between the solutions of adjacent eigenproblems. Whenever it is possible to unravel the existence of such a connection, the eigenproblem sequence is said to be correlated. When facing a sequence of correlated eigenproblems, the current strategy amounts to solving each eigenproblem in isolation. We propose a novel approach which exploits such correlation through the use of an eigensolver based on subspace iteration and accelerated with Chebyshev polynomials (ChFSI). The resulting eigensolver is optimized by minimizing the number of matvec multiplications and parallelized using the Elemental library framework. Numerical results show that ChFSI achieves excellent scalability and is competitive with current dense linear algebra parallel eigensolvers.

by <a href="">Mario Berljafa</a> (1), <a href="">Daniel Wortmann</a> (2), <a href="">Edoardo Di Napoli</a> (2 and 3) ((1) The University of Manchester, (2) Forschungszentrum Juelich, (3) AICES, RWTH Aachen) at April 17, 2014 01:30 AM

SWAPHI: Smith-Waterman Protein Database Search on Xeon Phi Coprocessors. (arXiv:1404.4152v2 [cs.DC] UPDATED)

The maximal sensitivity of the Smith-Waterman (SW) algorithm has enabled its wide use in biological sequence database search. Unfortunately, the high sensitivity comes at the expense of quadratic time complexity, which makes the algorithm computationally demanding for big databases. In this paper, we present SWAPHI, the first parallelized algorithm employing Xeon Phi coprocessors to accelerate SW protein database search. SWAPHI is designed based on the scale-and-vectorize approach, i.e. it boosts alignment speed by effectively utilizing both the coarse-grained parallelism from the many co-processing cores (scale) and the fine-grained parallelism from the 512-bit wide single instruction, multiple data (SIMD) vectors within each core (vectorize). By searching against the large UniProtKB/TrEMBL protein database, SWAPHI achieves a performance of up to 58.8 billion cell updates per second (GCUPS) on one coprocessor and up to 228.4 GCUPS on four coprocessors. Furthermore, it demonstrates good parallel scalability on varying numbers of coprocessors, and is also superior to both SWIPE on 16 high-end CPU cores and BLAST+ on 8 cores when using four coprocessors, with maximum speedups of 1.52 and 1.86, respectively. SWAPHI is written in C++ (with a set of SIMD intrinsics), and is freely available at this http URL

by <a href="">Yongchao Liu</a>, <a href="">Bertil Schmidt</a> at April 17, 2014 01:30 AM

Information Hiding and Attacks : Review. (arXiv:1404.4141v1 [cs.CR])

Information hiding is considered a very important part of our lives. Many techniques exist for securing information. This paper briefly reviews techniques for information hiding and the potential threats to those methods. It also briefly covers cryptanalysis and steganalysis, two methods for breaching such security measures.

by <a href="">Richa Gupta</a> at April 17, 2014 01:30 AM

Mechanism Design for Mobile Geo-Location Advertising. (arXiv:1404.4106v1 [cs.GT])

Mobile geo-location advertising, where mobile ads are targeted based on a user's location, has been identified as a key growth factor for the mobile market. As with online advertising, a crucial ingredient for their success is the development of effective economic mechanisms. An important difference is that mobile ads are shown sequentially over time and information about the user can be learned based on their movements. Furthermore, ads need to be shown selectively to prevent ad fatigue. To this end, we introduce, for the first time, a user model and suitable economic mechanisms which take these factors into account. Specifically, we design two truthful mechanisms which produce an advertisement plan based on the user's movements. One mechanism is allocatively efficient, but requires exponential compute time in the worst case. The other requires polynomial time, but is not allocatively efficient. Finally, we experimentally evaluate the tradeoff between compute time and efficiency of our mechanisms.

by <a href="">Nicola Gatti</a>, <a href="">Marco Rocco</a>, <a href="">Sofia Ceppi</a>, <a href="">Enrico H. Gerding</a> at April 17, 2014 01:30 AM

Revisiting the enumeration of all models of a Boolean 2-CNF. (arXiv:1208.2559v2 [cs.CC] UPDATED)

An O(Nn + n^2) time algorithm to enumerate all N models of a Boolean 2-CNF with n variables is presented. Using don't-care symbols, the models are output in clusters rather than one by one. Computer experiments confirm the high efficiency of the method.

by <a href="">Marcel Wild</a> at April 17, 2014 01:30 AM

On Bisimulations for Description Logics. (arXiv:1104.1964v5 [cs.LO] UPDATED)

We formulate bisimulations for useful description logics. The simplest among the considered logics is $\mathcal{ALC}_{reg}$ (a variant of propositional dynamic logic). The others extend that logic with inverse roles, nominals, quantified number restrictions, the universal role, and/or the concept constructor for expressing the local reflexivity of a role. They also allow role axioms. We give results about invariance of concepts, TBoxes and ABoxes, preservation of RBoxes and knowledge bases, and the Hennessy-Milner property w.r.t. bisimulations in the considered description logics. Using the invariance results we compare the expressiveness of the considered description logics w.r.t. concepts, TBoxes and ABoxes. Our results about separating the expressiveness of description logics are naturally extended to the case when instead of $\mathcal{ALC}_{reg}$ we have any sublogic of $\mathcal{ALC}_{reg}$ that extends $\mathcal{ALC}$. We also provide results on the largest auto-bisimulations and quotient interpretations w.r.t. such equivalence relations. Such results are useful for minimizing interpretations and concept learning in description logics. To deal with minimizing interpretations for the case when the considered logic allows quantified number restrictions and/or the constructor for the local reflexivity of a role, we introduce a new notion called QS-interpretation, which is needed for obtaining expected results. By adapting Hopcroft's automaton minimization algorithm, we give an efficient algorithm for computing the partition corresponding to the largest auto-bisimulation of a finite interpretation.

by <a href="">Ali Rezaei Divroodi</a>, <a href="">Linh Anh Nguyen</a> at April 17, 2014 01:30 AM

Using Elimination Theory to construct Rigid Matrices. (arXiv:0910.5301v3 [cs.CC] UPDATED)

The rigidity of a matrix A for target rank r is the minimum number of entries of A that must be changed to ensure that the rank of the altered matrix is at most r. Since its introduction by Valiant (1977), rigidity and similar rank-robustness functions of matrices have found numerous applications in circuit complexity, communication complexity, and learning complexity. Almost all nxn matrices over an infinite field have a rigidity of (n-r)^2. It is a long-standing open question to construct infinite families of explicit matrices even with superlinear rigidity when r = Omega(n).

In this paper, we construct an infinite family of complex matrices with the largest possible, i.e., (n-r)^2, rigidity. The entries of an n x n matrix in this family are distinct primitive roots of unity of orders roughly exp(n^2 log n). To the best of our knowledge, this is the first family of concrete (but not entirely explicit) matrices having maximal rigidity and a succinct algebraic description.

Our construction is based on elimination theory of polynomial ideals. In particular, we use results on the existence of polynomials in elimination ideals with effective degree upper bounds (effective Nullstellensatz). Using elementary algebraic geometry, we prove that the dimension of the affine variety of matrices of rigidity at most k is exactly n^2-(n-r)^2+k. Finally, we use elimination theory to examine whether the rigidity function is semi-continuous.

by <a href="">Abhinav Kumar</a>, <a href="">Satyanarayana V. Lokam</a>, <a href="">Vijay M. Patankar</a>, <a href="">Jayalal Sarma M. N</a> at April 17, 2014 01:30 AM


Way to mention null-parameter in documentation?

I am developing a kind of API and wondering which way would generally be preferred to document a null parameter:

A. Write something like @throws NullPointerException if p is null, assuming any method whose doc says nothing about it will accept a null parameter:

/**
 * Does something...
 * @param p the parameter to do something...
 * @throws NullPointerException if p is null
 */

B. Write something like this parameter can be null, assuming a null parameter is not accepted unless explicitly mentioned:

/**
 * Does something...
 * @param p the parameter... this can be null
 */
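For what it's worth, option A pairs naturally with a fail-fast null check, so the documented exception is what actually gets thrown. A minimal sketch (the Greeter class and greet method are made-up names for illustration):

```java
import java.util.Objects;

class Greeter {
    /**
     * Greets the given name.
     *
     * @param name the name to greet; must not be null
     * @throws NullPointerException if {@code name} is null
     */
    static String greet(String name) {
        Objects.requireNonNull(name, "name"); // fail fast, matching the @throws contract
        return "Hello, " + name;
    }
}
```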

Common sense tells me A is more legitimate, but it's actually painful to write it every time.

Which way would you choose?

Thank you!

by Ryoichiro Oka at April 17, 2014 01:23 AM

sbt test:doc Could not find any member to link

I'm attempting to run sbt test:doc and I'm seeing a number of warnings similar to below:

[warn] /Users/tleese/code/my/stuff/src/test/scala/com/my/stuff/common/tests/util/NumberExtractorsSpecs.scala:9: Could not find any member to link for "".

The problem appears to be that Scaladoc references from test sources to main sources are not able to link correctly. Any idea what I might be doing wrong or need to configure?

Below are the relevant sections of my Build.scala:

val docScalacOptions = Seq("-groups", "-implicits", "-external-urls:[urls]")

scalacOptions in (Compile, doc) ++= docScalacOptions
scalacOptions in (Test, doc) ++= docScalacOptions
autoAPIMappings := true

by Taylor Leese at April 17, 2014 01:18 AM


About computer science and category theory [duplicate]

I read that Category Theory has a lot to do with how programs and information can be organised. Can Category Theory simplify various programming strategies? If a specific Category is represented as a directed graph, is this similar to the flow charts used in programming?

by user128932 at April 17, 2014 01:15 AM


Building trees from preorder and inorder lists of values

I am to draw out the tree that results from these traversals. I thought I had a good grasp of these traversals, as they do not seem that complicated after looking at the Wikipedia page. I do not understand how these are valid trees unless they are very off-center from the root. Could someone give me some guidance and maybe explain how these start to build? Thanks

Pre Order: 14, 24, 34, 20, 36, 42, 22, 39, 27, 18, 5, 17, 11, 7, 15

In Order: 20, 34, 36, 24, 22, 42, 39, 14, 5, 18, 17, 27, 7, 11, 15
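The standard way these two traversals determine a unique tree: the first preorder value is the root, and its position in the inorder list splits the remaining values into the left and right subtrees, recursively. A sketch of that reconstruction (my own code, not from the post):

```java
import java.util.HashMap;
import java.util.Map;

class Node {
    int val;
    Node left, right;
    Node(int v) { val = v; }
}

class TreeBuilder {
    // Rebuild the tree: preorder's first element is the root; its index in the
    // inorder list separates the left subtree's values from the right's.
    static Node build(int[] pre, int[] in) {
        Map<Integer, Integer> idx = new HashMap<>();
        for (int i = 0; i < in.length; i++) idx.put(in[i], i);
        return build(pre, 0, pre.length - 1, 0, idx);
    }

    private static Node build(int[] pre, int preLo, int preHi, int inLo,
                              Map<Integer, Integer> idx) {
        if (preLo > preHi) return null;
        Node root = new Node(pre[preLo]);
        int mid = idx.get(pre[preLo]);   // root's position in the inorder list
        int leftSize = mid - inLo;       // number of nodes in the left subtree
        root.left = build(pre, preLo + 1, preLo + leftSize, inLo, idx);
        root.right = build(pre, preLo + leftSize + 1, preHi, mid + 1, idx);
        return root;
    }
}
```

Applied to the lists above, the root comes out as 14 with children 24 and 27, so the traversals do describe a valid tree; it is just quite off-center rather than balanced.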

submitted by rezaw

April 17, 2014 12:59 AM