# Planet Primates

## March 10, 2014

### /r/compsci

#### Help running a script?

I know this is probably a laughably basic question, but I'm pretty illiterate when it comes to this computer science stuff (though I'm working on it!). I'm trying to install this program into Anaconda, but I have no idea what to do with the script. Everything I've found assumes you just run it, but I think I'm missing something. Anyone have any input?

submitted by yungkef

### UnixOverflow

#### How to see the output produced by make install in FreeBSD

In FreeBSD, when you install software from ports using

cd /usr/ports/mysql56-server
make install


it produces a lot of output on the screen.

How can I save that output to a file so I can read it later?

I tried

cd /usr/ports/mysql56-server
make install > /home/mysql.install.log


but it failed. Any suggestions?
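A likely cause, sketched below: `>` captures only stdout, and some of the ports output (in particular any errors) goes to stderr. Also note that FreeBSD's default root shell is tcsh, whose redirection syntax differs from sh. Both variants, assuming the port path from the question:

```shell
cd /usr/ports/mysql56-server

# Bourne-style shells (sh, bash): redirect stderr as well as stdout
make install > /home/mysql.install.log 2>&1

# or keep watching the output while saving a copy:
make install 2>&1 | tee /home/mysql.install.log

# csh/tcsh (the FreeBSD root default) uses >& instead:
#   make install >& /home/mysql.install.log
```

The `tee` variant is often the most convenient for long port builds, since you can see progress and still grep the log afterwards.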

### Planet Theory

#### TR14-033 | Candidate Weak Pseudorandom Functions in $\mathrm{AC}^0 \circ \mathrm{MOD}_2$ | Siyao Guo, Adi Akavia, Andrej Bogdanov, Alon Rosen, Akshay Kamath

Pseudorandom functions (PRFs) play a fundamental role in symmetric-key cryptography. However, they are inherently complex and cannot be implemented in the class $\mathrm{AC}^0(\mathrm{MOD}_2)$. Weak pseudorandom functions (weak PRFs) do not suffer from this complexity limitation, yet they suffice for many cryptographic applications. We study the minimal complexity requirements for constructing weak PRFs. To this end:

1. We conjecture that the function family $F_A(x) = g(Ax)$, where $A$ is a random square $GF(2)$ matrix and $g$ is a carefully chosen function of constant depth, is a weak PRF. In support of our conjecture, we show that functions in this family are inapproximable by $GF(2)$ polynomials of low degree and do not correlate with any fixed Boolean function family of subexponential size.

2. We study the class $\mathrm{AC}^0 \circ \mathrm{MOD}_2$ that captures the complexity of our construction. We conjecture that all functions in this class have a Fourier coefficient of magnitude $\exp(-\mathrm{poly}\log n)$ and prove this conjecture in the case when the $\mathrm{MOD}_2$ function is typical.

3. We investigate the relation between the hardness of learning noisy parities and the existence of weak PRFs in $\mathrm{AC}^0 \circ \mathrm{MOD}_2$.

We argue that such a complexity-driven approach can play a role in bridging the gap between the theory and practice of cryptography.

### StackOverflow

#### How to convert an Iterator[Long] to an Iterator[String] in Scala

I have a requirement where I have to convert an Iterator[Long] to an Iterator[String] in Scala. Please let me know how I can do it.

Thanks
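Iterator has `map`, and it is lazy, so nothing is copied up front. A minimal sketch:

```scala
val longs: Iterator[Long] = Iterator(1L, 2L, 3L)

// map on an Iterator is lazy: each Long is converted as it is pulled
val strings: Iterator[String] = longs.map(_.toString)

println(strings.mkString(","))  // 1,2,3
```

Remember that consuming `strings` also consumes the underlying `longs` iterator.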

#### Clojure: Write a Clojure function that returns a list of all 2^n bit strings of length n

I want to write a Clojure function called (nbits n) that returns a list of all 2^n bit strings of length n.

My expected output is:

user=> (nbits -2)
()

user=> (nbits 0)
()

user=> (nbits 1)
((0) (1))

user=> (nbits 2)
((0 0) (0 1) (1 0) (1 1))

user=> (nbits 3)
((0 0 0) (0 0 1) (0 1 0) (0 1 1) (1 0 0) (1 0 1) (1 1 0) (1 1 1))


Here is my try:

(defn add0 [seq]
  (cond (empty? seq) 'nil
        (and (seq? seq) (> (count seq) 0))
        (cons (cons '0 (first seq)) (add0 (rest seq)))))

(defn add1 [seq]
  (cond (empty? seq) 'nil
        (and (seq? seq) (> (count seq) 0))
        (cons (cons '1 (first seq)) (add1 (rest seq)))))

(defn nbits [n]
(cond (number? n)
(cond (< n 1) '()
(= n 1) '((0) (1))
(> n 1)
:else 'nil))


But the output is not right. Where did I go wrong? I don't know how to fix it.
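Two things stand out in the nbits above: the (> n 1) branch has no body (its result falls through to :else), and nothing ever combines the add0 and add1 results. A sketch of one fix that sidesteps the helper functions entirely, extending every string by 0 and by 1, n times:

```clojure
(defn nbits [n]
  (if (or (not (number? n)) (< n 1))
    '()
    ;; start from one empty bit string and extend each string by 0 and 1
    (reduce (fn [acc _]
              (for [bits acc, b [0 1]]
                (concat bits [b])))
            '(())
            (range n))))

(nbits 2)
;=> ((0 0) (0 1) (1 0) (1 1))
```

The (< n 1) guard reproduces the expected () results for (nbits -2) and (nbits 0).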

#### How to append to a nested list in a Clojure atom?

I want to append a value to a list in a Clojure atom:

(def thing (atom {:queue '()}))


I know when it's not an atom, I can do this:

(concat '(1 2) '(3))


How can I translate that into a swap! command?

Note: I asked a similar question involving maps: Using swap to MERGE (append to) a nested map in a Clojure atom?
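swap! takes the update function plus its extra arguments, so update-in can target the nested list. A sketch mirroring the concat example above:

```clojure
(def thing (atom {:queue '()}))

;; update-in applies concat to the value at [:queue]; swap! makes it atomic
(swap! thing update-in [:queue] concat '(3))
;; @thing is now {:queue (3)}
```

Any further arguments after the path and function are passed through to concat, so appending more values works the same way.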

#### SCTP INIT failed

I have two virtual machines, both running FreeBSD 10 / i386 with the GENERIC kernel (the host is CentOS 6.5 x86-64 with KVM).

The first virtual machine is named freetest0 and the second is freetest1:

freetest0 = FreeBSD 10 / i386, IF vtnet2, 192.168.6.100
freetest1 = FreeBSD 10 / i386, IF vtnet2, 192.168.6.110

I want to test the speed between the two freetest interfaces, but the problem is that they cannot connect over SCTP; TCP and UDP work fine.

Whether I use iperf3 (with SCTP support) or netperfmeter, they cannot connect over SCTP.

#the server is freetest1
root@freetest1:~ # netstat -an -f inet
Active Internet connections (including servers)
tcp46      0      0 *.9000                 *.*                    LISTEN
tcp4       0      0 192.168.0.110.22       192.168.0.1.39754      ESTABLISHED
tcp4       0      0 192.168.0.110.22       192.168.0.1.39752      ESTABLISHED
tcp4       0      0 127.0.0.1.25           *.*                    LISTEN
tcp4       0      0 *.22                   *.*                    LISTEN
udp46      0      0 *.9000                 *.*
udp4       0      0 *.514                  *.*
Active SCTP associations (including servers)
sctp46 1to1  fe80::5054:ff:fe.9000                         LISTEN
192.168.8.110.9000
fe80::5054:ff:fe.9000
192.168.6.110.9000
fe80::5054:ff:fe.9000
192.168.0.110.9000
127.0.0.1.9000
fe80::1.9000
::1.9000
sctp46 1toN  fe80::5054:ff:fe.9001                         LISTEN
192.168.8.110.9001
fe80::5054:ff:fe.9001
192.168.6.110.9001
fe80::5054:ff:fe.9001
192.168.0.110.9001
127.0.0.1.9001
fe80::1.9001
::1.9001

root@freetest0:~ # netperfmeter 192.168.6.110:9000
Network Performance Meter - Version 1.0
---------------------------------------

Active Mode:
- Measurement ID  = 4bc75bae
- Control Address = 192.168.6.110:9001 - connecting ...

#<cannot get connected>

root@freetest0:~ # tcpdump -i vtnet2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vtnet2, link-type EN10MB (Ethernet), capture size 65535 bytes
14:11:03.839031 IP 192.168.6.100.55228 > 192.168.6.110.5201: Flags [S], seq 1318388212, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 2652085 ecr 0], length 0
14:11:03.868787 IP 192.168.6.110.5201 > 192.168.6.100.55228: Flags [R.], seq 0, ack 1318388213, win 0, length 0
14:11:35.235362 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:38.256378 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:40.256418 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:44.256099 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:52.254442 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]

root@freetest1:~ # tcpdump -i vtnet2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vtnet2, link-type EN10MB (Ethernet), capture size 65535 bytes
14:11:35.979349 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:39.000411 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:41.000495 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:45.000116 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]
14:11:52.998491 IP 192.168.6.100.52018 > 192.168.6.110.9001: sctp (1) [INIT] [init tag: 3995201801] [rwnd: 1864135] [OS: 10] [MIS: 2048] [init TSN: 332259025]


#### How to print to stdout in Clojure without returning nil

I am making a website and I want to know if there is any way to return a string of integers from a vector, each on its own line. When I rig my code to use apply str on the output, I can get something in the browser that goes from this -> [1 2 3 4] to this -> 1 2 3 4. But I want it to look like this:

1
2
3
4
.
.
.


When I try to get each element on its own line using println, pprint or format, I get a blank under the results header in the browser. I assume this is because all of those return nil. Is there some way to get the formatting I need, so that my output can be easily copied and pasted into an Excel file without the user having to format it by hand?
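One way that avoids println entirely: clojure.string/join returns the joined string as a value instead of printing it and returning nil.

```clojure
(require '[clojure.string :as str])

;; join returns a value, unlike println/pprint which return nil
(str/join "\n" [1 2 3 4])
;=> "1\n2\n3\n4"
```

One caveat for the browser case: HTML collapses newlines, so wrap the result in a `<pre>` tag (or join with `"<br>"`) for it to display one number per line.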

### CompsciOverflow

#### What happens on a Cache miss?

Present-day processors use more than one level of memory to approximate an ideal memory system, and to do more work per clock cycle they keep more than one instruction in the execution pipeline by exploiting ILP (instruction-level parallelism).

My question is: what happens upon a cache miss? That is, are the instructions both before and after the one causing the cache miss stalled, or only the instructions after it?

I know different cases may arise depending on whether the processor has speculative and out-of-order execution, and whether it can also exploit MLP (memory-level parallelism).

I want to know about the cases of a processor with MLP and without it.

I was not able to find helpful information.

#### Find a minimal vertex in a tree from which we can traverse some edges exactly twice, come back to that vertex, then do the same with the rest of the edges

By minimal, I mean that the difference between the sizes of the two subsets of edges has to be minimum.

### StackOverflow

#### Group by on the same field with different case classes in Scala

How can I group two lists that contain different case classes, where both classes have the same field?

case class X(a: Long, b: Long, c: Long, d: Long)
case class Y(a: Long, e: Long, f: Long)

val i = List(X(10,10,8,8))
val j = List(Y(10,10,8))

val joined = i ++ j
joined.groupBy(_.a)


error: value a is not a member of Product with Serializable

thanks
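The error appears because the common supertype Scala infers for the concatenated list is Product with Serializable, which has no member a. One way around it, sketched here with a made-up trait name (HasA), is to give both case classes a shared trait that exposes the field:

```scala
// HasA is a hypothetical trait introduced so the combined list has a
// type that actually exposes the shared field `a`.
trait HasA { def a: Long }

case class X(a: Long, b: Long, c: Long, d: Long) extends HasA
case class Y(a: Long, e: Long, f: Long) extends HasA

val i = List(X(10, 10, 8, 8))
val j = List(Y(10, 10, 8))

val joined: List[HasA] = i ++ j
val grouped = joined.groupBy(_.a)  // Map(10 -> List(X(10,10,8,8), Y(10,10,8)))
```

If the classes cannot be changed, mapping each element to a tuple first, e.g. `(i.map(x => (x.a, x)) ++ j.map(y => (y.a, y))).groupBy(_._1)`, achieves the same grouping without a shared supertype.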

#### Scala: counting character frequencies in a file

I am very new to Scala, and would appreciate any help (I have looked everywhere and spent the last 8 hours trying to figure this out).

Currently I have

def apply(file: String) : Iterator[String] =  {
scala.io.Source.fromFile(file).getLines().map(_.toLowerCase)
}


As well as

def groupFreq[A,B](xs: Iterator[A], f: A => B): HashMap[B, Int] = {
var freqMap = new HashMap[B, Int]
for (x <- xs) freqMap = freqMap + ( f(x) -> ( freqMap.getOrElse( f(x) , 0 ) +1 )  )
freqMap
}


apply just takes a file of words that we pass in.

GroupFreq takes xs: Iterator[A] and a grouping function f that converts A values to their B groups. The function returns a HashMap that for each B group, counts the number of A values that fell into the group.

I use both of these functions, to help me with charFreq, a function that uses both apply and groupFreq to pass back a HashMap that counts how many times a Char appears throughout the entire file. If the char does not appear anywhere in the file, then there should be no mapping for it.

def charFreq(file: String): HashMap[Char, Int] =
{
var it = Iterator[Char]()
val words = apply(file)
for {
xs<-words
} yield { it = it ++ xs.toIterator }

val chars   = it
val grouper = (x: Char) => x
groupFreq(chars, grouper)
}


My solution compiles, and apply and groupFreq work as intended, but when I run charFreq it fails with an error.

I believe I'm doing something wrong, most likely with my for loop and yield, but I've gone through the logic many times and I don't get why it doesn't work.

Google and StackOverflow have recommended flatMap, but I couldn't get that to work either.

Any help would be appreciated. Keep in mind this is a class assignment with the skeleton methods set up, so I cannot change the way apply and groupFreq and charFreq are set up, I can only manipulate the bodies which I have tried to do.
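The likely culprit is that for/yield over an Iterator is lazy: the yielded iterator is never consumed, so the side effects that grow `it` never run and `chars` stays empty. flatMap expresses the intent directly. A sketch, with groupFreq restated as in the question and file reading elided so the words iterator is passed in directly for illustration:

```scala
import scala.collection.immutable.HashMap

// same behaviour as the question's groupFreq
def groupFreq[A, B](xs: Iterator[A], f: A => B): HashMap[B, Int] = {
  var freqMap = HashMap.empty[B, Int]
  for (x <- xs) freqMap = freqMap + (f(x) -> (freqMap.getOrElse(f(x), 0) + 1))
  freqMap
}

// flatMap lazily flattens each word into its characters,
// replacing the for/yield that tried to grow `it` by side effect
def charFreq(words: Iterator[String]): HashMap[Char, Int] =
  groupFreq(words.flatMap(_.iterator), (c: Char) => c)
```

With the original skeleton, the body would be `groupFreq(apply(file).flatMap(_.iterator), (c: Char) => c)`.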

#### how to write hadoop map reduce programs in scala

I am writing a map-reduce application in Scala. Up to the map function everything works fine, but while writing the reducer I am facing a problem.

override def reduce(key: Text, values: java.lang.Iterable[Text], context: ReducerContext) { }

The ReducerContext is defined so that it refers to the Context inner class, so I am fine there.

The issue is with the (Java) Iterable component: I am not able to iterate through it. I understand that I first have to convert it into a Scala Iterable and then iterate over it. I tried that as well, but still didn't get the result.

I appreciate any help on this.

Thanks
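One way, using the standard scala.collection.JavaConverters: call asScala on the Java Iterable. The snippet below uses java.lang.Iterable[String] as a stand-in for Hadoop's Iterable[Text] so it is self-contained. A well-known Hadoop gotcha to keep in mind: the framework reuses the same Text instance across iterations, so copy the value (e.g. with toString) if you collect the elements rather than processing them one at a time.

```scala
import scala.collection.JavaConverters._

// stand-in for the Iterable[Text] Hadoop hands to reduce()
val values: java.lang.Iterable[String] = java.util.Arrays.asList("a", "b", "c")

// asScala wraps the Java Iterable so a Scala for-comprehension can walk it
for (v <- values.asScala) {
  println(v)
}
```

Inside the real reducer the shape is the same: `for (v <- values.asScala) { ... v.toString ... }`.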

#### How do I break out of a loop in Scala?

How do I break out of a loop (for problem 4 of Project Euler)?

var largest = 0
for (i <- 999 to 1 by -1) {
  for (j <- i to 1 by -1) {
    val product = i * j
    if (largest > product)
      // I want to break out here
    else
      if (product.toString.equals(product.toString.reverse))
        largest = largest max product
  }
}


How do I turn nested for loops into tail recursion?

From Scala Talk at FOSDEM 2009 http://www.slideshare.net/Odersky/fosdem-2009-1013261 on the 22nd page:

Break and continue: Scala does not have them. Why? They are a bit imperative; better to use many smaller functions. Issue: how to interact with closures. They are not needed!

What is the explanation?
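For the first part, one mechanism in the standard library is scala.util.control.Breaks: wrapping only the inner loop in breakable makes break() leave just that loop. A sketch of the loop above rewritten that way:

```scala
import scala.util.control.Breaks._

var largest = 0
for (i <- 999 to 1 by -1) {
  breakable {                       // break() exits only this block
    for (j <- i to 1 by -1) {
      val product = i * j
      if (largest > product)
        break()                     // all remaining j give smaller products
      else if (product.toString == product.toString.reverse)
        largest = largest max product
    }
  }
}
```

breakable/break is implemented with a control-flow exception, which is exactly why the slide argues for smaller functions instead: a tail-recursive inner function that simply returns when `largest > product` expresses the same early exit without any break construct.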

#### Scala Project Visualisation

In Java I often liked to look at UML class diagrams to get an overview of a project.

My bachelor thesis now deals with the implementation of a project in Scala, and I want to present an overview of the project hierarchy. What are appropriate means to do so? Are there any tools? From what I have heard, UML does not seem well suited to Scala.

### CompsciOverflow

#### Transitive Closure

I am solving an ISM (Interpretive Structural Modeling) problem and I need to find the reachability matrix by checking transitivity. My final reachability matrix is filled with 1s, and I cannot find any matrix like that in the examples. Could my matrix be wrong, or am I doing something wrong?
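An all-1s reachability matrix is not necessarily wrong: it just means every element reaches every other, i.e. the digraph behind the model is strongly connected. To double-check a matrix by hand, Warshall's algorithm computes the transitive closure; here is an illustrative sketch (not tied to any ISM tool) on a 0/1 adjacency matrix:

```scala
// Warshall's algorithm: r(i)(j) = 1 iff j is reachable from i.
def reachability(adj: Array[Array[Int]]): Array[Array[Int]] = {
  val n = adj.length
  // ISM convention: every element reaches itself, so seed the diagonal
  val r = Array.tabulate(n, n)((i, j) => if (i == j) 1 else adj(i)(j))
  for (k <- 0 until n; i <- 0 until n; j <- 0 until n)
    if (r(i)(k) == 1 && r(k)(j) == 1) r(i)(j) = 1
  r
}
```

Running this on your original adjacency matrix and comparing against your hand-derived reachability matrix should show whether the all-1s result is genuine or a transcription slip.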

### StackOverflow

#### Baking-Pi Challenge - Understanding & Improving

I spent some time yesterday writing the solution for this[0] challenge and was able to get through it without cheating, but I was left with a couple of questions. Reference material here[1].

This is my code.

(ns baking-pi.core
(:import java.math.MathContext))

(defn modpow [n e m]
(.modPow (biginteger n) (biginteger e) (biginteger m)))

(defn div [top bot]
(with-precision 34 :rounding HALF_EVEN
(/ (bigdec top) (bigdec bot))))

(defn pow [n e]
(.pow (bigdec n) (bigdec e) MathContext/DECIMAL128))

(defn round
([n] (.round (bigdec n) MathContext/DECIMAL128))
([n & args] (->> [n args] (flatten) (map round))))

(defn left [n d]
(letfn [(calc [k] (let [bot (+' (*' 8 k) d)
top (modpow 16 (-' n k) bot)]
(div top bot)))]
(->> (inc' n)
(range 0)
(map calc)
(reduce +'))))

(defn right [n d]
(letfn [(calc [[sum'' sum' k]]
(let [sum' (if (nil? sum') 0M sum')
top (pow 16 (-' n k))
bot (+' (*' k 8) d)
delta (div top bot)]
[sum' (+' sum' delta) (inc' k)]))
(pred [[sum'' sum' k]]
(cond (or (nil? sum'') (nil? sum')) true
(apply == (round sum'' sum')) false
:else true))]
(->> [nil nil (inc' n)]
(iterate calc)
(drop-while pred)
(first)
(second))))

(defn bbp [n]
(letfn [(part [m d] (*' m (+' (left n d) (right n d))))]
(let [sum (-' (part 4 1) (part 2 4) (part 1 5) (part 1 6))]
(-> sum
(-' (long sum))
(*' 16)
(mod 16)
(Long/toHexString)))))


I have 2 questions.

1. The wiki makes the following statement. Since my calculation is accurate up to 34 digits after the decimal, how can I leverage it to produce more hexadecimal digits of PI per bbp call?

in theory, the next few digits up to the accuracy of the calculations used would also be accurate

2. My algorithm relied on BigInteger's modPow for modular exponentiation (based on the following quote), and BigDecimals everywhere else. It is also slow. Bearing in mind that I don't want to lose meaningful accuracy per question #1, what is the best way to speed this program up and make it valid clojurescript as well as clojure?

To calculate 16^(n−k) mod (8k + 1) quickly and efficiently, use the modular exponentiation algorithm.

EDIT: Changed from 3 questions to 2. Managed to answer first question on my own.

### QuantOverflow

#### Tracking delistings on NASDAQ & NYSE

Does anyone know of a webpage (or webpages) of current delistings for NASDAQ & NYSE?

### UnixOverflow

#### BSD browser information

I am developing an application and I need some BSD browser information, specifically the output of http://browserspy.dk/browser.php.

It would help if someone could post the output for Firefox on FreeBSD.

OpenBSD, DragonFly, or NetBSD output would help too if anybody has them installed, but I mainly need the FreeBSD information.

### TheoryOverflow

#### What is the complexity of this path problem?

Instance: An undirected graph $G$ with two distinguished vertices $s\neq t$, and an integer $k\geq 2$.

Question: Does there exist an $s-t$ path in $G$, such that the path touches at most $k$ vertices? (A vertex is touched by the path if the vertex is either on the path, or has a neighbor on the path.)

### CompsciOverflow

#### Amortized Cost of a delete from an extendable heap

Given an extendable heap with $n$ elements and an array of size $L$, I'm trying to use the accounting method to find the amortized cost of a delete. We want a load factor of $\frac{1}{4}$.

So, the minimum number of deletes before we need to shrink the array is $\lceil\frac{4n - L}{4}\rceil$.

The cost of shrinking at that point would be $\lfloor\frac{n}{4}\rfloor$.

So, I'm thinking that the amortized cost would be $\lfloor\frac{n}{4}\rfloor / \lceil\frac{4n - L}{4}\rceil$.

However, given that n changes each time we do a delete, does it mean that the charge for a delete is not constant but variable? In most of the examples I've seen, the charge is a constant.

### StackOverflow

#### Scala: Issue w/ sorting

Sort of new to Scala. Say I have an array:

1,2,3,4,5,6,7,8,9,10


and I would like to get back all the numbers from 6 onward.

How would I achieve this in Scala?
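Two standard-library options, sketched below; they differ in whether they rely on the input being sorted:

```scala
val xs = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

xs.filter(_ >= 6)    // keeps every element >= 6, sorted input or not
xs.dropWhile(_ < 6)  // drops the leading run below 6; relies on sorted input
```

On this array both return Array(6, 7, 8, 9, 10); on unsorted data only `filter` gives the "all numbers from 6" meaning.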

### CompsciOverflow

#### Is the algorithm implemented by git bisect optimal?

Let $G$ be a DAG. We know that some nodes in the DAG are "bad", while the others are "good"; a descendant of a bad node is bad while the ancestor of a good node is good. We also know that bad nodes have a unique minimal element in the DAG which we'd like to find querying as few nodes as possible with queries of the type "Are you good or bad?".

This problem is solved in Git, the popular version control system, by the command git-bisect, which helps a programmer find the first commit in which a bug was introduced.

At the start, the algorithm implemented by Git assumes it knows a single bad commit and one or more good commits. At each step of its execution, the algorithm finds a commit using the following steps (taken from here):

1. Keep only the commits that:

a) are ancestors of the bad commit (including the bad commit itself),

b) are not ancestors of a good commit (excluding the good commits).

2. Starting from the good ends of the graph, associate to each commit the number of ancestors it has plus one.

3. Associate to each commit $\min(X, N-X)$, where $X$ is the value associated to the commit in step 2. and $N$ is the total number of commits in the graph.

4. The best bisection point is the commit with the highest associated number.

This algorithm essentially finds the commit that achieves the "worst best case": in fact, $\min(X,N-X)$ is the number of nodes in the DAG at the next iteration in the best case, thus $\max\min(X,N-X)$ is the worst best case.
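Steps 2-4 can be sketched directly. The five-commit DAG below is a hypothetical example (child mapped to its parents), with ancestors counted by a naive recursion that is fine for a DAG this small:

```scala
// Hypothetical commit DAG: each commit mapped to its parents.
val parents = Map(
  "a" -> Nil, "b" -> List("a"), "c" -> List("b"),
  "d" -> List("b"), "e" -> List("c", "d")
)

def ancestors(c: String): Set[String] = {
  val ps = parents(c).toSet
  ps ++ ps.flatMap(ancestors)
}

val n = parents.size

// Step 2: X = number of ancestors plus one; step 3: score = min(X, N - X)
val scores = parents.keys.map { c =>
  val x = ancestors(c).size + 1
  c -> math.min(x, n - x)
}.toMap

// Step 4: the best bisection point maximizes the score
val best = scores.maxBy(_._2)._1
```

Here "b", "c" and "d" all score 2 (ties are broken arbitrarily), while the root and the tip score 1 and 0: querying a mid-graph commit guarantees the most shrinkage.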

I'm wondering:

• Does it make any difference if we select the "best worst case", that is, the node that achieves $\min\max(X,N-X)$?
• Is this algorithm optimal?

### StackOverflow

#### What does it mean that RACStream represents a monad?

The docs say RACStream represents a "monad". Could somebody explain what this specifically means in the context of RACStream? I looked up the functional meaning on the wiki, but I am having difficulty seeing how it benefits ReactiveCocoa and why this pattern was chosen.

### CompsciOverflow

#### Lamport's Byzantine Generals Algorithm

I've stumbled at the first OralMessage algorithm in Lamport et al.'s paper.

I've searched the web, and there are dozens of sites restating it in exactly the same terms and examples, which isn't helping me.

Lamport claims the algorithm can handle $(n-1)/3$ traitors and works when the commander is a traitor.

My restatement of the algorithm:

1. The commander sends a value to each of the lieutenants. (round 0)

2. Each lieutenant forwards each message he receives to the other lieutenants:
don't forward messages that already have $(N-1)/3$ names (e.g. N=10 and you receive 'gcd0');
add your name to the front of the message before forwarding (e.g. you are b and receive 'c0', send 'bc0').

3. After all messages have been sent, each lieutenant:
examines the received messages and makes their decision;
if it's a tie, then decide 0.

I'm not sure how to do step 3; the paper says the algorithm "assumes a sequence of [majority] functions" (nested?).
In the example, I'm assuming we take the majority in each vector of round 2 (i.e. left to right), and then take the majority of these.

EXAMPLE

Commander is a traitor, N=7, M=(7-1)/3=2, so 6 lieutenants one of whom is a traitor. I have assigned the lieutenants letters b-g.

Here are the messages received at each node in rounds 1 & 2, assuming a node can send to itself. (the messages in brackets are redundant from B's point of view. I don't know if this is important.):

<(b1) ,c0   ,d0   ,eX   ,f1   ,g1   >

<     ,(cb1),(db1),(ebX),(fb1),(gb1)>
<(bc0),     ,dc0  ,ecX  ,fc0  ,gc0  >
<(bd0),cd0  ,     ,edX  ,fd0  ,gd0  >
<(beX),ceX  ,deX  ,     ,feX  ,geX  >
<(bf1),cf1  ,df1  ,efX  ,     ,gf1  >
<(bg1),cg1  ,dg1  ,egX  ,fg1  ,     >


Note:
'dc0' is sent to everyone by 'd' (because 'd' was the last to prepend their name).
'X' indicates an unreliable message; 'e' is a traitor and always sends unreliable messages.

But step 3 gives 1,0,0,X,1,1, which is no better than round 1.
And the majority of these is 1 if X is 1, and 0 if X is 0. So the traitor can confound us.

What am I doing wrong?

### Planet Clojure

#### Clojure Gazette 1.69


Issue 1.69, March 09, 2014

### Editorial

Hello!

Enjoy the issue!

Sincerely,
Eric Normand
<ericwnormand@gmail.com>

The Developer to Watch

## Zach Tellman

Zach Tellman has created a handful of libraries that make Clojure development faster and closer to the metal without sacrificing the abstractions we have all come to rely on in Clojure. Check out his Github repos. He's also interesting on Twitter. Check out his blog. Also, get in touch with him and let him know I sent you!

Summer of Code

## Typed Clojure Request for Mentors for GSOC 2014

Ambrose Bonnaire-Sergeant is administering several Google Summer of Code projects related to Typed Clojure. He is looking for mentors who can take a small amount of responsibility for the students. There are several exciting projects that students will tackle this summer, all promising to improve the Clojure experience. If you're interested, reply to the mailing list.

core.async

## Working with core.async

This is a series of three (and maybe more to come) experience reports using core.async.

Legend

## Tony Hoare Bibliography

I came across this gem of a page when searching for information about Tony Hoare. Apparently, he's still doing research and publishing at Oxford. This bibliography has links to many papers in PDF form. Start with The Emperor's Old Clothes (his Turing Award Lecture). After that, maybe Communicating Sequential Processes, the source of the idea behind core.async.

Symbolic computation

## The Secret Life of a Mathematica Expression

David Leibs explores the proprietary language that comes with Mathematica. I had been thinking about Mathematica when I saw a preview video of the Wolfram Language.

### StackOverflow

#### Running ScalaTest in Play Framework 2.2.X

I'm trying to run some functional ScalaTests in Play 2.2 but can't seem to be able to import the @Test annotation needed to run the tests. I've tried looking for solutions, but they seem to be for different versions, since none work with my 2.2 version.

Can anyone guide me on running ScalaTest on Play?
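One possible source of confusion: ScalaTest suites do not use JUnit's @Test annotation at all; you extend a spec trait and sbt/Play discovers the suite by type. A minimal skeleton under those assumptions (class and string names are hypothetical; the Matchers trait is the ScalaTest 2.x name, older releases called it ShouldMatchers):

```scala
import org.scalatest.FlatSpec
import org.scalatest.Matchers

// Discovered by the test runner because it extends a ScalaTest suite;
// no annotation is needed on the test methods.
class ApplicationSpec extends FlatSpec with Matchers {

  "The application" should "pass a trivial check" in {
    // exercise your Play controllers / FakeApplication here
    (1 + 1) should be (2)
  }
}
```

With the scalatest dependency declared in your Play build, running `test` in the Play console should pick this up.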

### CompsciOverflow

#### What is the best way to index lookups on a 2D array of integers that is boundless in x and y?

Let's say you have a data model that consists of a 2D grid of integer points. This grid is sparsely populated and boundless in x and y (up to the max of a 32-bit integer).

What is the best way to index these points in order to have an optimised lookup on an arbitrary (x, y) coordinate? Is an O(1) lookup solution possible?
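One common approach, sketched below: pack the two signed 32-bit coordinates into a single 64-bit key and use a hash map, which gives expected O(1) lookups without bounding the coordinates (`key` is a helper name made up for this sketch):

```scala
import scala.collection.mutable

// Bijectively pack the signed (x, y) pair into one Long:
// x in the high 32 bits, y (masked to unsigned) in the low 32 bits.
def key(x: Int, y: Int): Long = (x.toLong << 32) | (y & 0xffffffffL)

val grid = mutable.HashMap.empty[Long, Int]
grid(key(3, -7)) = 42

grid.get(key(3, -7))  // Some(42)
grid.get(key(-7, 3))  // None
```

The packing is a bijection, so distinct points never collide at the key level; expected O(1) is the best a general sparse unbounded grid offers, since worst-case hash collisions can still degrade a lookup.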

#### Time complexity of a compiler

I am interested in the time complexity of a compiler. Clearly this is a very complicated question, as there are many compilers, compiler options and variables to consider. Specifically, I am interested in LLVM but would be interested in any thoughts people have or places to start research. A quick google seems to bring little to light.

My guess would be that there are some optimisation steps which are exponential, but which have little impact on the actual time. E.g., exponential based on the number of arguments of a function.

Off the top of my head, I would say that generating the AST would be linear. IR generation would require stepping through the tree while looking up values in ever-growing tables, so $O(n^2)$ or $O(n\log n)$. Code generation and linking would be a similar type of operation. Therefore, my guess would be $O(n^2)$, if we remove exponentials of variables which do not realistically grow.

I could be completely wrong though. Does anyone have any thoughts on it?

#### Why a deterministic PDA accepts $\epsilon$ input but a DFA does not

I was going through a deterministic PDA that accepts $wcw^R$ (described in Ullman's textbook), in which the last transition is given as $(q_1,\epsilon, Z_0)\to(q_2,Z_0)$, where $q_2$ is the final state.

In DFAs we don't consider $\epsilon$ transitions, while in PDAs we do include them. Why?

#### Relationship between Las Vegas algorithms and deterministic algorithms

I'm wondering why the following argument doesn't work for showing that the existence of a Las Vegas algorithm also implies the existence of a deterministic algorithm:

Suppose that there is a Las Vegas algorithm $A$ that solves some graph problem $P$, i.e., $A$ takes an $n$-node input graph $G$ as input (I'm assuming the number of edges is $\le n$) and eventually yields a correct output, while terminating within time $T(G)$ with some nonzero probability.

Suppose that there is no deterministic algorithm that solves $P$. Let $A^\rho$ be the deterministic algorithm that is given by running the Las Vegas algorithm $A$ with a fixed bit string $\rho$ as its random string. Let $k=k(n)$ be the number of $n$-node input graphs (with $\le n$ edges). Since there is no deterministic algorithm for $P$, it follows that, for any $\rho$, the deterministic algorithm $A^\rho$ fails on at least one of the $k$ input graphs. Returning to the Las Vegas algorithm $A$, this means that $A$ has a probability of failure of $\ge 1/k$, a contradiction to $A$ being Las Vegas.

### StackOverflow

#### Using swap to MERGE (append to) a nested map in a Clojure atom?

Let's say I have an atom that contains a map like this:

{:count 0 :map hash-map}


How can I use swap to merge another key-value pair onto :map?
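The same update-in shape as for lists works here, with merge as the function applied to the nested map (the atom name and keys below are illustrative):

```clojure
(def a (atom {:count 0 :map {}}))

;; update-in drills down to :map and merges the new entries atomically
(swap! a update-in [:map] merge {:new-key 42})
;; @a => {:count 0, :map {:new-key 42}}
```

For a single key-value pair, `(swap! a update-in [:map] assoc :new-key 42)` is equivalent.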

### CompsciOverflow

#### Data Compression: Which is Better, "Compress before Encrypt" or "Encrypt before Compress"?

Which approach is better: compressing data before encrypting it, or encrypting data before compressing it? I think the answer depends on the application requirements. Please share your views.

### TheoryOverflow

#### Guess the bound of Lovász Path Removal Conjecture

Kawarabayashi et al. [1] proved that:

Theorem: There exists a function $f(k)=O(k^4)$ such that the following holds: for any two vertices $s$ and $t$ of an $f(k)$-connected graph $G$, there exists an induced $s-t$ path $P$ such that $G-E(P)$ is $k$-connected.

Lovász Path Removal Conjecture: There exists a function $f=f(k)$ such that the following holds. For every $f(k)$-connected graph $G$ and two vertices $s$ and $t$ in $G$, there exists a path $P$ with endpoints $s$ and $t$ such that $G-V(P)$ is $k$-connected.

I guess that in the Lovász Path Removal Conjecture, $f(k)=o(k^5)$. Is that right?

[1] Kawarabayashi K., Lee O., Reed B., et al. A weaker version of Lovász' path removal conjecture. Journal of Combinatorial Theory, Series B, 2008, 98(5): 972-979.

### CompsciOverflow

#### Should O(1) necessarily stand for a non-zero constant?

I had a debate with my friend. He argued that $o(1)\subseteq O(1)$, so if a function converges to 0, then it belongs to both $o(1)$ and $O(1)$. However, I imagine that $O(1)$ represents constant time, in essence non-zero constant time. Is it broadly accepted that a function converging to zero belongs to $o(1)$ and not to $O(1)$?

### StackOverflow

#### return vector without nth element (complement of (nth))

I need to get a random value from a vector of integers, and I also need to get back the vector without that value.

Example code is below. I know I can easily put this code in a function to reuse.

But I wonder if there is a function or a better way to create a new vector without the nth element (thus the complement of the (nth) function in core Clojure)?

(let [v [1 2 3 5 9 1]
      id (rand-int (count v))
      value (nth v id)
      head (take id v)
      tail (drop (inc id) v)]
  {:value value :new-vector (vec (concat head tail))})
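There is no built-in complement of nth, but subvec makes a small helper cheap (remove-nth is a name made up for this sketch):

```clojure
;; drop element i by gluing the slices before and after it back together
(defn remove-nth [v i]
  (vec (concat (subvec v 0 i) (subvec v (inc i)))))

(remove-nth [1 2 3 5 9 1] 2)
;=> [1 2 5 9 1]
```

subvec is O(1) on vectors, so the cost here is just rebuilding the result vector once.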


### Lambda the Ultimate Forum

#### Limitations of FRP?

I'm doing some research on programming paradigms that deal with time, like functional reactive programming, and I was wondering if anyone has explored how far one can go with FRP. From my own personal experience doing Superglue (an OO reactive PL based on FRP-like signals), I had to abandon FRP when working on incremental compilation and language-aware editing: although FRP should be suitable for this problem (lots of consistency to maintain between code, AST, type system, and UI), it turned out that the problem required adding things to collections in distributed locations (e.g. in parse-tree computations), whereas FRP seems to require everything for some state to be specified centrally.

The best source I can find for pushing FRP for more complex programming domains is Antony Courtney's thesis. This still seems to be where things are at.

The last time we discussed this topic was in 2007. Are there any new developments in FRP expressiveness?

### Dave Winer

#### Change coming in RSS feed

At some point soon, the Scripting News feed will have items without titles. The bodies of these items are in their descriptions, as explained in the RSS 2.0 spec.

An item may represent a "story" -- much like a story in a newspaper or magazine; if so its description is a synopsis of the story, and the link points to the full story. An item may also be complete in itself, if so, the description contains the text (entity-encoded HTML is allowed; see examples), and the link and title may be omitted. All elements of an item are optional, however at least one of title or description must be present.

#### How we got here

Scripting News started in this format, and Frontier News before it.

Manila, the first CMS to produce RSS feeds, supported title-less items in feeds.

I felt this was okay as long as Twitter held promise for being a revolutionary Internet-scale notification service with a powerful API. But they've backed off that. Their service hasn't improved in a long time. I didn't realize how much I missed doing the intermediate-length posts until I started using Facebook regularly. But stuff I post there has no lasting value. So I need a better place for that kind of writing, so why not use my own blog? Of course that's the right answer.

#### Undoing the mistake

I'm undoing the mistake I made in 2006. And that means you may either find that your RSS reader supports my feed, or it doesn't. I'm not going to let them hold me back. If you can't read my feed in their tools, then you can switch to one that works properly, read the site in a web browser, or don't read it at all.

I'm sorry it has to be this way, but reader developers have been deciding arbitrarily not to support an important part of the RSS standard. I want to use the feature, I was using it long before any of them existed, and it's easy for them to support. Just a little bit of thinking and a little bit of coding.

### arXiv Cryptography and Security

#### An Expressive Model for the Web Infrastructure: Definition and Application to the BrowserID SSO System. (arXiv:1403.1866v1 [cs.CR])

The web constitutes a complex infrastructure and as demonstrated by numerous attacks, rigorous analysis of standards and web applications is indispensable.

Inspired by successful prior work, in particular the work by Akhawe et al. as well as Bansal et al., we propose a formal model for the web infrastructure. Unlike prior works, which aim at automatic analysis, our model is so far not directly amenable to automation; however, it is much more comprehensive and accurate with respect to the standards and specifications. As such, it can serve as a solid basis for the analysis of a broad range of standards and applications.

As a case study and another important contribution of our work, we use our model to carry out the first rigorous analysis of the BrowserID system (a.k.a. Mozilla Persona), a recently developed complex real-world single sign-on system that employs technologies such as AJAX, cross-document messaging, and HTML5 web storage. Our analysis revealed a number of very critical flaws that could not have been captured in prior models. We propose fixes for the flaws, formally state relevant security properties, and prove that the fixed system in a setting with a so-called secondary identity provider satisfies these security properties in our model. The fixes for the most critical flaws have already been adopted by Mozilla and our findings have been rewarded by the Mozilla Security Bug Bounty Program.

#### Verification of A Security Adaptive Protocol Suite Using SPIN. (arXiv:1403.1846v1 [cs.NI])

The advancement of mobile and wireless communication technologies in recent years has introduced various adaptive protocols to address the need for secure communications. Security is a crucial success factor for any communication protocol, especially in mobile environments with their ad hoc behavior. Formal verification plays an important role in the development and application of safety-critical systems; formal, exhaustive verification techniques for analyzing the security and safety properties of communication protocols increase confidence in a protocol. SPIN is a powerful model checker that verifies the correctness of distributed communication models in a rigorous and automated fashion. This short paper proposes a SPIN-based formal verification approach for a security-adaptive protocol suite. The protocol suite includes a neighbor discovery mechanism and a routing protocol. Both parts of the protocol suite are modeled in SPIN and exhaustively checked against various temporal properties, which ensures the applicability of the protocol suite in real-life applications.

#### Cooperative Simultaneous Localization and Tracking in Mobile Agent Networks. (arXiv:1403.1824v1 [cs.IT])

We introduce a framework and methodology of cooperative simultaneous localization and tracking (CoSLAT) in decentralized mobile agent networks. CoSLAT provides a consistent combination of cooperative self-localization (CSL) and distributed target tracking (DTT). Multiple mobile targets and mobile agents are tracked using pairwise measurements between agents and targets and between agents. We propose a distributed CoSLAT algorithm that combines particle-based belief propagation with the likelihood consensus scheme and performs a bidirectional probabilistic information transfer between CSL and DTT. Simulation results demonstrate significant improvements in both self-localization and target tracking performance compared to separate CSL and DTT.

#### Gray Codes and Overlap Cycles for Restricted Weight Words. (arXiv:1403.1818v1 [math.CO])

A Gray code is a listing structure for a set of combinatorial objects such that some consistent (usually minimal) change property is maintained throughout adjacent elements in the list. While Gray codes for m-ary strings have been considered in the past, we provide a new, simple Gray code for fixed-weight m-ary strings. In addition, we consider a relatively new type of Gray code known as overlap cycles and prove basic existence results concerning overlap cycles for fixed-weight and weight-range m-ary words.
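For readers new to the topic, the minimal-change property is easiest to see in the classic binary reflected Gray code (not the fixed-weight construction of this paper); a quick Python sketch:

```python
def reflected_gray(n):
    # The i-th codeword of the n-bit binary reflected Gray code is i XOR (i >> 1);
    # consecutive codewords differ in exactly one bit.
    return [i ^ (i >> 1) for i in range(2 ** n)]
```

The listing is also cyclic: the last codeword differs from the first in a single bit, which is the property that overlap cycles generalize.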

#### High-Order Splitting Methods for Forward PDEs and PIDEs. (arXiv:1403.1804v1 [q-fin.CP])

This paper is dedicated to the construction of high-order (in both space and time) finite-difference schemes for both forward and backward PDEs and PIDEs, such that option prices obtained by solving both the forward and backward equations are consistent. This approach is partly inspired by Andreasen & Huge, 2011 who reported a pair of consistent finite-difference schemes of first-order approximation in time for an uncorrelated local stochastic volatility model. We extend their approach by constructing schemes that are second-order in both space and time and that apply to models with jumps and discrete dividends. Taking correlation into account in our approach is also not an issue.

#### A tight lower bound for Szemer\'edi's regularity lemma. (arXiv:1403.1768v1 [math.CO])

Addressing a question of Gowers, we determine the order of the tower height for the partition size in a version of Szemer\'edi's regularity lemma.

#### Continuous Features Discretization for Anomaly Intrusion Detectors Generation. (arXiv:1403.1729v1 [cs.NI])

Network security is a growing issue, with the evolution of computer systems and the expansion of attacks. Biological systems have been inspiring scientists and designers of new adaptive solutions, such as genetic algorithms. In this paper, we present an approach that uses a genetic algorithm to generate anomaly network intrusion detectors. The proposed algorithm uses a discretization method for the continuous features selected for intrusion detection, to create some homogeneity between values that have different data types. Then, the intrusion detection system is tested against the NSL-KDD data set using different distance methods. A comparison is held among the results, and it is shown that the proposed approach gives good results; recommendations are given for future experiments.
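The abstract does not say which discretization method is used; a common choice for turning continuous features into homogeneous values is equal-width binning, sketched here as an illustration only:

```python
def equal_width_bins(values, k):
    # Map each continuous value to one of k equal-width bins over [min, max].
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a constant feature
    return [min(int((v - lo) / width), k - 1) for v in values]
```

After binning, all features share a small discrete alphabet, so distance methods can compare them uniformly.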

#### Do Google Trend data contain more predictability than price returns?. (arXiv:1403.1715v1 [q-fin.TR])

Using non-linear machine learning methods and a proper backtest procedure, we critically examine the claim that Google Trends can predict future price returns. We first review the many potential biases that may influence backtests with this kind of data positively, the choice of keywords being by far the greatest culprit. We then argue that the real question is whether such data contain more predictability than price returns themselves: our backtest yields a performance of about 17bps per week which only weakly depends on the kind of data on which predictors are based, i.e. either past price returns or Google Trends data, or both.

#### Methods of executable code protection. (arXiv:1403.1694v1 [cs.PL])

The article deals with the problems of constructing a protection system for executable code. Techniques for breaking the integrity of executable code, and ways to counter them, are described. The adoption of virtual machine technology for protecting executable code from analysis is considered, and the use of virtual machines is substantiated as the best way to resist analysis of executable code. Protecting executable code by transferring the protected code into a virtual execution environment is considered, and an efficient implementation of the method is proposed.

#### LTLf satisfiability checking. (arXiv:1403.1666v1 [cs.LO])

We consider here Linear Temporal Logic (LTL) formulas interpreted over \emph{finite} traces. We denote this logic by LTLf. The existing approach for LTLf satisfiability checking is based on a reduction to standard LTL satisfiability checking. We describe here a novel direct approach to LTLf satisfiability checking, where we take advantage of the difference in the semantics between LTL and LTLf. While LTL satisfiability checking requires finding a \emph{fair cycle} in an appropriate transition system, here we need to search only for a finite trace. This enables us to introduce specialized heuristics, where we also exploit recent progress in Boolean SAT solving. We have implemented our approach in a prototype tool and experiments show that our approach outperforms existing approaches.

#### Link and Location Based Routing Mechanism for Energy Efficiency in Wireless Sensor Networks. (arXiv:1403.1655v1 [cs.NI])

In Wireless Sensor Networks, sensed data are reported to the sink by the available nodes in the communication range. The sensed data should be reported to the sink with the frequency expected by the sink. In order to establish communication between source and sink, link-based routing is used. Link-based routing aims to achieve an energy-efficient and reliable routing path. This mechanism considers the status of each node (its current energy level, in Joules), the link condition (the number of transmissions that the Cluster Head (CH) and Gateway (GW) candidates conduct), and the transmit power (the power required for transmission, in Joules). A metric called the Predicted Transmission Count (PTX) is calculated for each node from its status, link condition, and transmit power. The node with the highest PTX has the highest priority and is the potential candidate to act as CH or GW. The selection of a proper CH or GW thus reduces energy consumption and increases the network lifetime.

#### Optimal Energy-Aware Epidemic Routing in DTNs. (arXiv:1403.1642v1 [cs.SY])

In this work, we investigate the use of epidemic routing in energy constrained Delay Tolerant Networks (DTNs). In epidemic routing, messages are relayed by intermediate nodes at contact opportunities, i.e., when pairs of nodes come within the transmission range of each other. Each node needs to decide whether to forward its message upon contact with a new node based on its own residual energy level and the age of that message. We mathematically characterize the fundamental trade-off between energy conservation and a measure of Quality of Service as a dynamic energy-dependent optimal control problem. We prove that in the mean-field regime, the optimal dynamic forwarding decisions follow simple threshold-based structures in which the forwarding threshold for each node depends on its current remaining energy. We then characterize the nature of this dependence. Our simulations reveal that the optimal dynamic policy significantly outperforms heuristics.

#### Optimal Patching in Clustered Malware Epidemics. (arXiv:1403.1639v1 [cs.CR])

Studies on the propagation of malware in mobile networks have revealed that the spread of malware can be highly inhomogeneous. Platform diversity, contact list utilization by the malware, clustering in the network structure, etc. can also lead to differing spreading rates. In this paper, a general formal framework is proposed for leveraging such heterogeneity to derive optimal patching policies that attain the minimum aggregate cost due to the spread of malware and the surcharge of patching. Using Pontryagin's Maximum Principle for a stratified epidemic model, it is analytically proven that in the mean-field deterministic regime, optimal patch disseminations are simple single-threshold policies. Through numerical simulations, the behavior of optimal patching policies is investigated in sample topologies and their advantages are demonstrated.

#### Unsupervised Anomaly-based Malware Detection using Hardware Features. (arXiv:1403.1631v1 [cs.CR])

Recent works have shown promise in using microarchitectural execution patterns to detect malware programs. These detectors belong to a class of detectors known as signature-based detectors as they catch malware by comparing a program's execution pattern (signature) to execution patterns of known malware programs. In this work, we propose a new class of detectors - anomaly-based hardware malware detectors - that do not require signatures for malware detection, and thus can catch a wider range of malware including potentially novel ones. We use unsupervised machine learning to build profiles of normal program execution based on data from performance counters, and use these profiles to detect significant deviations in program behavior that occur as a result of malware exploitation. We show that real-world exploitation of popular programs such as IE and Adobe PDF Reader on a Windows/x86 platform can be detected with nearly perfect certainty. We also examine the limits and challenges in implementing this approach in the face of a sophisticated adversary attempting to evade anomaly-based detection. The proposed detector is complementary to previously proposed signature-based detectors and can be used together to improve security.
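The general recipe here - profile normal behavior, then flag large deviations - can be sketched with a simple per-feature z-score detector. This is a deliberately simplified stand-in for the paper's unsupervised models; the feature vectors would come from performance counters:

```python
def fit_profile(samples):
    # Build a profile (per-feature mean and std) from normal-run feature vectors.
    n, dim = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(dim)]
    std = [max((sum((s[i] - mean[i]) ** 2 for s in samples) / n) ** 0.5, 1e-9)
           for i in range(dim)]
    return mean, std

def is_anomalous(profile, x, threshold=3.0):
    # Flag a vector whose largest per-feature z-score exceeds the threshold.
    mean, std = profile
    return max(abs(x[i] - mean[i]) / std[i] for i in range(len(x))) > threshold
```

Exploitation that perturbs even one counter far outside its normal range trips the detector without any malware signature.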

#### Disimplicial arcs, transitive vertices, and disimplicial eliminations. (arXiv:1403.1628v1 [cs.DM])

In this article we deal with the problems of finding the disimplicial arcs of a digraph and recognizing some interesting graph classes defined by their existence. A diclique of a digraph is a pair $V \to W$ of sets of vertices such that $v \to w$ is an arc for every $v \in V$ and $w \in W$. An arc $v \to w$ is disimplicial when $N^-(w) \to N^+(v)$ is a diclique. We show that the problem of finding the disimplicial arcs is equivalent, in terms of time and space complexity, to that of locating the transitive vertices. As a result, an efficient algorithm to find the bisimplicial edges of bipartite graphs is obtained. Then, we develop simple algorithms to build disimplicial elimination schemes, which can be used to generate bisimplicial elimination schemes for bipartite graphs. Finally, we study two classes related to perfect disimplicial elimination digraphs, namely weakly diclique irreducible digraphs and diclique irreducible digraphs. The former class is associated to finite posets, while the latter corresponds to Dedekind complete finite posets.
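The definitions translate directly into code. A small Python check over a digraph represented as a set of arcs (this is a naive illustration of the definitions, not the paper's efficient algorithm):

```python
def in_neighbors(arcs, w):
    return {v for (v, u) in arcs if u == w}

def out_neighbors(arcs, v):
    return {w for (u, w) in arcs if u == v}

def is_diclique(arcs, V, W):
    # V -> W is a diclique if v -> w is an arc for every v in V and w in W.
    return all((v, w) in arcs for v in V for w in W)

def is_disimplicial(arcs, v, w):
    # The arc v -> w is disimplicial when N^-(w) -> N^+(v) is a diclique.
    return (v, w) in arcs and is_diclique(arcs, in_neighbors(arcs, w),
                                          out_neighbors(arcs, v))
```

Note that $v \in N^-(w)$ and $w \in N^+(v)$, so the diclique condition in particular re-checks the arc itself.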

#### On the Complexity of Barrier Resilience for Fat Regions. (arXiv:1302.4707v3 [cs.CC] UPDATED)

In the \emph{barrier resilience} problem (introduced by Kumar {\em et al.}, Wireless Networks 2007), we are given a collection of regions of the plane, acting as obstacles, and we would like to remove the minimum number of regions so that two fixed points can be connected without crossing any region. In this paper, we show that the problem is NP-hard when the regions are fat (even when they are axis-aligned rectangles of aspect ratio $1 : (1 + \varepsilon)$). We also show that the problem is fixed-parameter tractable for such regions. Using this algorithm, we show that if the regions are $\beta$-fat and their arrangement has bounded ply $\Delta$, there is a $(1+\varepsilon)$-approximation that runs in $O(2^{f(\Delta, \varepsilon,\beta)}n^7)$ time, for some polylogarithmic function $f\in O(\frac{\Delta^2\beta^6}{\varepsilon^4}\log(\beta\Delta/\varepsilon))$.

#### Social Influence as a Voting System: a Complexity Analysis of Parameters and Properties. (arXiv:1208.3751v3 [cs.GT] UPDATED)

We consider a simple and altruistic multiagent system in which the agents are eager to perform a collective task but where their real engagement depends on the willingness to perform the task of other influential agents. We model this scenario by an influence game, a cooperative simple game in which a team (or coalition) of players succeeds if it is able to convince enough agents to participate in the task (to vote in favor of a decision). We take the linear threshold model as the influence model. We show first the expressiveness of influence games showing that they capture the class of simple games. Then we characterize the computational complexity of various problems on influence games, including measures (length and width), values (Shapley-Shubik and Banzhaf) and properties (of teams and players). Finally, we analyze those problems for some particular extremal cases, with respect to the propagation of influence, showing tighter complexity characterizations.

#### Analysis of Agglomerative Clustering. (arXiv:1012.3697v4 [cs.DS] UPDATED)

The diameter $k$-clustering problem is the problem of partitioning a finite subset of $\mathbb{R}^d$ into $k$ subsets called clusters such that the maximum diameter of the clusters is minimized. One early clustering algorithm that computes a hierarchy of approximate solutions to this problem (for all values of $k$) is the agglomerative clustering algorithm with the complete linkage strategy. For decades, this algorithm has been widely used by practitioners. However, it is not well studied theoretically. In this paper, we analyze the agglomerative complete linkage clustering algorithm. Assuming that the dimension $d$ is a constant, we show that for any $k$ the solution computed by this algorithm is an $O(\log k)$-approximation to the diameter $k$-clustering problem. Our analysis does not only hold for the Euclidean distance but for any metric that is based on a norm. Furthermore, we analyze the closely related $k$-center and discrete $k$-center problem. For the corresponding agglomerative algorithms, we deduce an approximation factor of $O(\log k)$ as well.
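The algorithm analyzed here is simple to state: start with singleton clusters and repeatedly merge the two clusters whose complete linkage (maximum pairwise distance) is smallest. A naive $O(n^3)$ Python sketch, for illustration only:

```python
import math

def complete_linkage(points, k):
    # Agglomerative clustering: merge the pair of clusters with the
    # smallest complete linkage until only k clusters remain.
    clusters = [[p] for p in points]

    def linkage(c1, c2):
        # Complete linkage: the maximum pairwise distance between clusters.
        return max(math.dist(a, b) for a in c1 for b in c2)

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

The paper's result says the maximum cluster diameter this greedy hierarchy produces at level $k$ is within an $O(\log k)$ factor of optimal (for constant dimension).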

### StackOverflow

#### Functional programming style vs performance in Ruby [on hold]

I love functional programming and I love Ruby as well. If I can code an algorithm in a functional style rather than an imperative style, I do it. I avoid updating or reusing variables as much as possible, avoid "bang!" methods, and use "map", "reduce", and similar functions instead of "each" or dangerous loops, etc. Basically I try to follow the rules of this article.

The problem is that usually the functional solution is much slower than the imperative one. In this article there are clear and scary examples of that, being up to 15-20 times slower in some cases. After reading it and doing some benchmarks, I am afraid to keep using the functional style, at least in Ruby.

On the other hand, I feel more comfortable writing code in a functional style because it is smart and clean, it tends to produce fewer bugs, and I think it is more "correct", especially nowadays when we can use concurrency and parallelism for better performance.

So I am very confused about which style to use in Ruby. Any wise recommendation will be appreciated.

### CompsciOverflow

#### Explanation of Heavy light decomposition

Can anyone explain heavy-light decomposition of trees or give a resource to read about it? I have already gone through http://ipsc.ksp.sk/2009/real/solutions/l.html which is the best I could find, but I still could not completely understand how it works.
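The core idea: for each node, mark the child with the largest subtree as its "heavy" child; edges to heavy children link up into chains, and any root-to-node path crosses only O(log n) chains. A minimal Python sketch computing the chain heads (a hypothetical helper; adjacency is given as lists, and queries over chains are left out):

```python
def heavy_light(n, adj, root=0):
    # Iterative DFS: parents and an order with parents before children.
    parent, order, seen = [-1] * n, [], [False] * n
    stack, seen[root] = [root], True
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v], parent[v] = True, u
                stack.append(v)
    # Subtree sizes, children processed before parents.
    size = [1] * n
    for u in reversed(order):
        if parent[u] != -1:
            size[parent[u]] += size[u]
    # Heavy child: the child with the largest subtree.
    heavy = [-1] * n
    for u in order:
        best = 0
        for v in adj[u]:
            if v != parent[u] and size[v] > best:
                best, heavy[u] = size[v], v
    # Chain heads: a heavy child continues its parent's chain.
    head = [0] * n
    for u in order:
        if u == root:
            head[u] = root
        elif heavy[parent[u]] == u:
            head[u] = head[parent[u]]
        else:
            head[u] = u
    return head, heavy, parent
```

Path queries then walk from each endpoint to its chain head (jumping a whole chain at a time) until both endpoints share a chain, which gives the O(log n) bound.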

### Planet Clojure

#### Setting Up Travis CI for OpenCV and Midje

I’ve been fiddling with OpenCV for a few weeks. Using OpenCV and Clojure is straightforward. The hardest part was setting up OpenCV.

When I’m experimenting and get to a place where I feel like I know what I’m doing, I start writing tests. Eventually you’ll want to set up continuous integration, and then you’ll remember how difficult installing OpenCV was! If you’d like to use Travis CI with your Clojure project using OpenCV, I’ve already done the work for you! Swapping out midje isn’t a problem if you’d rather use core.test or speclj, but I’m going to focus on midje here.

I’m assuming your project looks something like Magomimmo’s opencv-start. He’s also the guy that wrote the Clojure tutorial on OpenCV’s documentation site. If you set up your project using those instructions, that is fine, since they are almost identical. The only difference is that I’m using a more up-to-date version of OpenCV. You will have to make a few changes to the Travis CI yaml configuration below if you want to use a different version.

First, we need to add the midje and lein localrepo plugins to our project.clj if they aren’t there already. Travis CI needs to know about them. Adding the following line inside defproject within your project.clj should suffice:

:plugins [[lein-localrepo "0.5.3"] [lein-midje "3.1.3"]]

Next, we need to add the .travis.yml config to the root of the repo:

language: clojure
lein: lein2
script: lein2 midje
jdk:
  - oraclejdk7

compiler:
  - gcc

before_install:
  - sudo apt-get update

install:
  - sudo apt-get install python-dev python-numpy

before_script:
  - git clone https://github.com/Itseez/opencv.git
  - cd opencv
  - git checkout 2.4
  - mkdir build
  - cd build
  - cmake ..
  - make -j8
  - sudo make -j8 install
  - cd ../..
  - mkdir -p native/linux/x86_64
  - cp opencv/build/lib/libopencv_java248.so native/linux/x86_64/libopencv_java248.so
  - cp opencv/build/bin/opencv-248.jar .
  - jar -cMf opencv-native-248.jar native
  - lein2 localrepo install opencv-248.jar opencv/opencv 2.4.8
  - lein2 localrepo install opencv-native-248.jar opencv/opencv-native 2.4.8

With this configuration, we tell Travis CI that our project is a Clojure project, that we are using Leiningen 2.0 and midje for testing, and that we want Oracle JDK 7. The lines after that are for building OpenCV.

The lines before before_script tell Travis CI to use the GCC compiler and to install the Python dev packages and NumPy that OpenCV needs. The lines in before_script are the actual build-process automation for OpenCV.

If you noticed that the before_script is similar to the build steps in Magomimmo’s tutorial, you would be right. The only change I made was to use OpenCV 2.4.8. If you’d like to use a different release, you should change the before_script to match your needs. On Travis CI, the build process takes about 8 minutes.

### StackOverflow

#### Stackoverflow when calling sol-count on 10 (N-queens program)

So this is my first time programming in Clojure, and I am running into a StackOverflowError. This program tries to find all the possible solutions to the N-queens problem.

When I call sol-count (which finds the number of solutions for a given N) on 10 or higher, I get a stack overflow:

(defn qextends?
  "Returns true if a queen at rank extends partial-sol."
  [partial-sol rank]
  (if (>= (count partial-sol) 1)
    (and (not= (first partial-sol) (- rank (count partial-sol)))
         (not= (first partial-sol) (+ rank (count partial-sol)))
         (not= (first partial-sol) rank)
         (qextends? (rest partial-sol) rank))
    true))

(defn qextend-helper [n x partial-sol partial-sol-list]
  (if (<= x n)
    (if (qextends? partial-sol x)
      (qextend-helper n (inc x) partial-sol (conj partial-sol-list (conj partial-sol x)))
      (qextend-helper n (inc x) partial-sol partial-sol-list))
    partial-sol-list))

(defn qextend
  "Given a vector *partial-sol-list* of all partial solutions of length k,
  returns a vector of all partial solutions of length k + 1."
  [n partial-sol-list]
  (if (>= (count partial-sol-list) 1)
    (vec (concat (qextend-helper n 1 (first partial-sol-list) [])
                 (qextend n (rest partial-sol-list))))
    nil))

(defn sol-count-helper [n x partial-sol-list]
  (if (<= x (- n 1))
    (sol-count-helper n (+ 1 x) (qextend n partial-sol-list))
    (qextend n partial-sol-list)))

(defn sol-count
  "Returns the total number of n-queens solutions on an n x n board."
  [n]
  (count (sol-count-helper n 1 [[]])))


### /r/compsci

#### [Help] Implementing Floorplanning Optimization

Hello, for work I need to optimize a slicing tree. I had personally never heard of it before Thursday, so I quickly turned to Google. Long story short, I tried an implementation that is unfeasible, and now I want to use the Wong-Liu floorplanning algorithm, which I got from this paper (p. 591).

Algorithm 10.4 Wong-Liu Floorplanning (P, epsilon, r, k)
1.  E = 12V3V4V...nV;                 // initial solution
2.  EBest = E; T = -Davg / ln P;
3.  do
4.      reject = 0;
5.      for ite = 0 to k do
6.          SelectOperation(Op);
7.          case Op of
8.              Op1: select two adjacent operands e_i and e_j; E' = Swap(E, e_i, e_j);
9.              Op2: select a nonzero-length chain C; E' = Complement(E, C);
10.             Op3: done = FALSE;
11.                  while not done do
12.                      Choice 1: select an adjacent operand e_i and operator e_{i+1};
13.                          if (e_{i-1} != e_{i+1}) and (2N_{i+1} < i) then done = TRUE;
14.                      Choice 2: select an adjacent operator e_i and operand e_{i+1};
15.                          if (e_i != e_{i+2}) then done = TRUE;
16.                  end while
17.                  E' = Swap(E, e_i, e_{i+1});
18.         end case
19.         DCost = cost(E') - cost(E);
20.         if (DCost <= 0) or (Random < e^(-DCost/T)) then
21.             E = E';
22.             if cost(E) < cost(EBest) then EBest = E;
23.         else
24.             reject = reject + 1;
25.         end if
26.     end for
27.     T = rT;                       // reduce temperature
28. until (reject/k > 0.95) or (T < epsilon) or (OutOfTime)

edit: view paper for best code viewing

The implementation doesn't look that hard, the code is pretty clear. However, I have no idea how to get the initial Delta avg.

It is written:

Before the simulated annealing process starts, we perturb the initial normalized Polish expression for a certain time to compute the average of all positive (uphill) cost change Delta avg.

Implementation-wise, does it mean I have to run lines 1 to 19 of the algorithm for a certain time and average the positive DeltaCost values to get Delta avg?

If you know any other algorithm that would be even simpler, I'd be happy to learn about it! Thanks
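The calibration step described in the quote can be sketched as follows, in Python. This is a hedged interpretation of the paper's text, not its code: `cost` and `perturb` are placeholders for the Polish-expression cost function and the Op1/Op2/Op3 moves:

```python
import math
import random

def initial_temperature(cost, perturb, state, p_accept=0.95, trials=200):
    # Perturb the initial solution for a while, average the positive
    # (uphill) cost changes, and set T = -Davg / ln(P).
    uphill = []
    for _ in range(trials):
        candidate = perturb(state)
        delta = cost(candidate) - cost(state)
        if delta > 0:
            uphill.append(delta)
        state = candidate
    d_avg = sum(uphill) / len(uphill) if uphill else 1.0
    return -d_avg / math.log(p_accept)
```

Since ln(P) is negative for P < 1, the result is a positive temperature at which an average uphill move is accepted with probability roughly P.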

submitted by NoProblemDude

#### Data mining and Interpolation methods.

Hi, a professor of mine talked about interpolation methods and said that they are very useful in data mining, for example in detecting fraud, but he didn't explain how, and I haven't found much information about it! Could someone explain how interpolation is useful in data mining? Thank you.
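One common use (an illustrative guess at what the professor meant, not a definitive answer): imputing missing values in a series before running a detector, and flagging records that deviate strongly from the interpolated trend. A toy linear-interpolation sketch, assuming gaps are interior:

```python
def fill_missing(xs):
    # Fill interior runs of None by linear interpolation between neighbors.
    ys = list(xs)
    i, n = 0, len(ys)
    while i < n:
        if ys[i] is None:
            j = i
            while j < n and ys[j] is None:
                j += 1
            left, right = ys[i - 1], ys[j]  # assumes the gap is interior
            for k in range(i, j):
                t = (k - (i - 1)) / (j - (i - 1))
                ys[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return ys
```

A transaction that sits far from the value interpolation would predict for its position is then a candidate anomaly.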

submitted by twathf

### Planet Theory

#### Implementing and reasoning about hash-consed data structures in Coq

Authors: Thomas Braibant, Jacques-Henri Jourdan, David Monniaux
Abstract: We report on four different approaches to implementing hash-consing in Coq programs. The use cases include execution inside Coq, or execution of the extracted OCaml code. We explore the different trade-offs between faithful use of pristine extracted code, and code that is fine-tuned to make use of OCaml programming constructs not available in Coq. We discuss the possible consequences in terms of performances and guarantees. We use the running example of binary decision diagrams and then demonstrate the generality of our solutions by applying them to other examples of hash-consed data structures.

#### Massively parallel read mapping on GPUs with PEANUT

Authors: Johannes Köster, Sven Rahmann
Abstract: We present PEANUT (ParallEl AligNment UTility), a highly parallel GPU-based read mapper with several distinguishing features, including a novel q-gram index (called the q-group index) with a small memory footprint, built on the fly over the reads, and the possibility to output either the best hits or all hits of a read. Designing the algorithm particularly for the GPU architecture, we were able to reach maximum core occupancy for several key steps. Our benchmarks show that PEANUT outperforms other state-of-the-art mappers in terms of speed and sensitivity. The software is available at this http URL
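The q-group index is a GPU-oriented refinement of the classical q-gram index, which is itself only a few lines: map every length-q substring to the positions where it occurs. A plain-Python sketch of the classical structure, for orientation:

```python
def qgram_index(text, q):
    # Map each length-q substring of text to its list of start positions.
    index = {}
    for i in range(len(text) - q + 1):
        index.setdefault(text[i:i + q], []).append(i)
    return index
```

Read mapping then looks up each q-gram of a read to collect candidate positions in the reference before running a full alignment there.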

### /r/compsci

#### Was a Guide ever made on how to get training and certification for free or near free?

Question: is anyone aware of a guide or compilation of information that lists various free but respected sources for training and certification in computer science subjects?

submitted by silverwolfer

### TheoryOverflow

#### Packing problems with repetitions

In packing problems, we need to select a set of sets of items such that no item is chosen twice (in $Set-Packing$, the actual items must not be packed twice; in $Graph-Packing$, the copies of the graph have to be vertex-disjoint; in multidimensional matching, every item has to appear once; etc.).

Are there any studied problems that ask for packing such that every item is packed at most $p$ times?

For example, is there any known reference for the following (perhaps under a different name)?

$repetition-Set-Packing$:

given a universe $\mathcal{U}=\{e_1,..,e_n\}$, two numbers $k,r\in \mathbb{N}$ and a set of subsets $\mathcal{S}=\{s_1,s_2,...,s_m\}\subseteq 2^\mathcal{U}$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$, such that every item in $\mathcal{U}$ appears in at most $r$ sets in $\mathcal{S'}$?
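Written as a 0-1 feasibility question in the notation above, the problem asks whether

```latex
\exists\, x \in \{0,1\}^m \;:\quad
  \sum_{j=1}^{m} x_j = k
  \qquad\text{and}\qquad
  \sum_{j \,:\, e_i \in s_j} x_j \le r \quad \text{for all } i \in \{1,\dots,n\}.
```

Setting $r = 1$ recovers ordinary $Set-Packing$.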

Are there any other works on packing with limited repetitions?

### StackOverflow

#### "Side-effecting lexical closure" vs function in Scala

In his answer's comment section, Apocalisp states the following:

Well, you did ask for a function. A side-effenting [sic] lexical closure is emphatically not a function.

What exactly does he mean by "side-effecting lexical closure", and how is that different from a function?

My guess is that they're trying to differentiate functions in a functional programming sense - where no side effects are allowed (such as changing state of variables or outputting values), from mere procedures, which do have side effects.

If that is the case, then does Scala make this distinction by itself, or is it merely left to the programmer? If so, is every callable (for lack of a better term) that doesn't have side effects a function, and every callable that does have them a side-effecting lexical closure?
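The distinction itself is language-independent, so here is a minimal illustration in Python rather than Scala: a pure function always maps the same input to the same output, while a side-effecting lexical closure captures and mutates state from its enclosing scope, so repeated calls with the same (empty) argument list give different results:

```python
def square(x):
    # A pure function: the result depends only on the argument.
    return x * x

def make_counter():
    # Returns a side-effecting lexical closure: it captures and
    # mutates `count` from the enclosing scope on every call.
    count = 0
    def step():
        nonlocal count
        count += 1
        return count
    return step
```

In the functional-programming sense, `step` is not a function: calling it twice with the same arguments yields different values, which is exactly what the quoted comment is objecting to. Scala's type system does not enforce this distinction; it is left to the programmer.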

#### Specifying Unit Test Location in Gradle

I've been getting my feet wet with Gradle for a school project (not an assignment) and I have this project that is divided into two folders, src/main/scala and src/test/scala.

As you can probably tell, the test folder stores my unit tests, but for some reason I can't get Gradle to find them and run them as it should. I'm using Scala with ScalaTest for this project.

Is there any way to tell Gradle where to look for test files? Or is there any logical explanation to why it isn't detecting my files?

This is my build.gradle:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'scala'

repositories {
    mavenCentral()
}

sourceSets {
    main {
        scala {
            srcDirs = ['src/scala']
        }
    }
    test {
        scala {
            srcDirs = ['test/scala']
        }
    }
}

dependencies {
    compile 'org.scala-lang:scala-library:2.10.3'
    testCompile 'junit:junit:4.10'
    testCompile 'org.scalatest:scalatest_2.10:2.1.+'
}

test {
    useJUnit()
    testLogging {
        // Show that tests are run in the command-line output
        events 'started', 'passed'
    }
}


### HN Daily

#### Daily Hacker News for 2014-03-09

The 10 highest-rated articles on Hacker News on March 09, 2014 which have not appeared on any previous Hacker News Daily are:

## March 09, 2014

### StackOverflow

#### Can a var be a key in a map?

(def nextStudentNumber 1000)

(defn s
  [lastName firstName]
  (let [student {:nextStudentNumber {:lastName lastName
                                     :firstName firstName
                                     :id (inc nextStudentNumber)}}]
    student))


In this instance, I have created the var nextStudentNumber, and I want the map's keys to change based on the student rather than always being the literal keyword :nextStudentNumber.

### CompsciOverflow

#### Shamos-Hoey Line segment intersection runtime

In the Shamos-Hoey algorithm for finding whether or not any two of $n$ line segments intersect, which is available at this site: http://geomalgorithms.com/a09-_intersect-3.html, there is use of "nearest line above" and "nearest line below". The algorithm is supposed to run in time $O(n\log n)$. Here is their pseudocode:

Initialize event queue EQ = all segment endpoints;
Sort EQ by increasing x and y;
Initialize sweep line SL to be empty;

While (EQ is nonempty) {
    Let E = the next event from EQ;
    If (E is a left endpoint) {
        Let segE = E's segment;
        Insert segE into SL;
        Let segA = the segment above segE in SL;
        Let segB = the segment below segE in SL;
        If (I = Intersect(segE with segA) exists)
            return TRUE;   // an Intersect Exists
        If (I = Intersect(segE with segB) exists)
            return TRUE;   // an Intersect Exists
    }
    Else {  // E is a right endpoint
        Let segE = E's segment;
        Let segA = the segment above segE in SL;
        Let segB = the segment below segE in SL;
        Delete segE from SL;
        If (I = Intersect(segA with segB) exists)
            return TRUE;   // an Intersect Exists
    }
    Remove E from EQ;
}
return FALSE;      // No Intersections


If one studies the C++ code provided at the bottom of the webpage, one sees that this is simply a "next" and "previous" in a BST; however, I can't seem to tell which information is being used as the BST key.

My issue is the following: if we are considering all $y$-values at the current $x$-value of the sweep line, this is not merely a check for the next or previous endpoint value in a BST, and cannot take $O(\log n)$ time. However, if we are checking per endpoint coordinate, this could not possibly be correct, since the following situation would lead to an incorrect execution:

The algorithm should find that $B$ and $C$ intersect on insertion of $B$ into the search tree ("SweepLine" / "SL") while sweeping. However, if we are ordering by endpoint coordinates, $A$ is $B$'s previous, $C$ is not, and this would run into problems.

#### How does binary addition work?

I find binary confusing. I have watched Minecraft redstone videos on binary adders, real binary adders, diagrams, etc., and yet I have not learned much at all. How do electrons flowing through wires made of gold "add/subtract" to make numbers through some logic gates?!
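The gate-level idea can be sketched in a few lines: each column of a binary addition is handled by a "full adder" built from XOR, AND, and OR gates, with the carry rippling to the next column. A rough Python model (ours, not from any particular video):

```python
# A full adder combines three input bits into a sum bit and a carry bit
# using only logic gates (XOR, AND, OR).
def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in                      # XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))      # AND/OR gates
    return sum_bit, carry_out

def add_binary(x_bits, y_bits):
    """Add two equal-length little-endian bit lists by rippling the carry."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)  # the final carry becomes the most significant bit
    return result

# 6 + 3 = 9: [0, 1, 1] is 6 and [1, 1, 0] is 3 in little-endian binary
print(add_binary([0, 1, 1], [1, 1, 0]))  # [1, 0, 0, 1] -> 9
```

Subtraction is usually done with the same circuit by adding the two's complement of the second operand.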

### StackOverflow

#### Why doesn't a Scala function defined without empty parentheses behave like a function?

Consider the following 2 objects

object TestObj1 {
  def testMethod = "Some text"
}

object TestObj2 {
  def testMethod() = "Some text"
}


and if I call those methods directly, they do what I expect

scala> TestObj1.testMethod
res1: String = Some text

scala> TestObj2.testMethod
res2: String = Some text


But now if we define the following function

def functionTakingFunction(callback: () => String) {
  println("Call returns: " + callback())
}


and try to call it, the method defined without () is not accepted.

scala> functionTakingFunction(TestObj1.testMethod)
<console>:10: error: type mismatch;
found   : String
required: () => String
functionTakingFunction(TestObj1.testMethod)
^

scala> functionTakingFunction(TestObj2.testMethod)
Call returns: Some text


I also noticed that you can't call TestObj1.testMethod using parentheses, since it already is a String. But what is causing this behavior?

### TheoryOverflow

#### Greedy MAX SAT approximation ratio

Consider a naive MAX SAT approximation algorithm:

1. pick a literal $l$ which appears in maximum number of clauses
2. set the corresponding variable of $l$, such that all clauses containing $l$ are satisfied
3. repeat on the reduced formula until no variables remain

What is the approximation factor of the algorithm?

It's easy to show by induction that at least half of all clauses will end up satisfied. But I can't find a tight example with only 1/2 of the clauses satisfied and all clauses satisfiable. I expect that the approximation ratio is better than 1/2, but I can neither prove it nor disprove it.
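For concreteness, the three steps above can be sketched as follows (the clause representation, signed integers for literals, is our own choice, not from the post):

```python
# Sketch of the greedy procedure above. Clauses are sets of nonzero ints:
# +v is the literal x_v, -v is its negation.
def greedy_max_sat(clauses):
    clauses = [set(c) for c in clauses]
    satisfied = 0
    while clauses:
        # 1. pick the literal appearing in the most clauses
        counts = {}
        for c in clauses:
            for lit in c:
                counts[lit] = counts.get(lit, 0) + 1
        best = max(counts, key=counts.get)
        # 2. setting best's variable satisfies every clause containing best
        satisfied += counts[best]
        remaining = []
        for c in clauses:
            if best in c:
                continue              # satisfied, drop it
            c.discard(-best)          # the falsified literal disappears
            if c:                     # an emptied clause can never be satisfied
                remaining.append(c)
        clauses = remaining           # 3. repeat on the reduced formula
    return satisfied

demo = [{1, 2}, {1, 3}, {-1}]
print(greedy_max_sat(demo))  # picks literal 1, satisfying 2 of the 3 clauses
```

On the demo formula the greedy choice of literal 1 empties the clause {-1}, so only 2 of 3 clauses are satisfied even though all 3 are simultaneously satisfiable (set x1 false, x2 and x3 true).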

Thank you very much.

### Planet Clojure

This Wednesday we’re performing algorithmic visuals for IJAD Dance‘s showing of In-Finite Space as part of the AHRC Creative Economy showcase. Twitter messages are geometrically formatted in 3D and cast into a retro-aesthetic graphical fly-through sequence to be interpreted by the dancers. Technology: Field, hybrid Python/Clojure mix.

### TheoryOverflow

#### Distributive expansion of CNF and implicants

I am looking for references for the following theorems.

Theorem 1: Distributive expansion of a CNF formula $P_c$ (product of sums) results in a DNF formula (sum of products) consisting of all prime implicants of $P_c$. This DNF is called the Blake canonical form (BCF).

Theorem 2: If we transform a disjunctive clause $C_d$ with $k$ literals to a disjunctive clause of conjunctions $C_m$ by replacing each literal $l_i, i = (1, ..., k)$ with the conjunction $(\neg l_1 \wedge ... \wedge \neg l_{i-1} \wedge l_i)$, then $C_m$ will be logically equivalent to $C_d$. The clause $C_m$ is called a clause with maximized conflicts.

Distributive expansion of $C_m$ shows that it is logically equivalent to the unmaximized clause $C_d$, $C_m = C_d$.

Theorem 3: Distributive expansion of a CNF formula with maximized conflicts $P_m$ (product of sums of products), with simplification of outermost clauses before innermost clauses, results in a DNF formula $P_u$ (sum of products) defining a set of (not necessarily prime) implicants $I_u$ for $P_m$. The implicants from $I_u$ cover all possible solutions for $P_m$. The implicants $I_u$ are unique, in that no implicant $M_i \in I_u$ covers the solutions of any other implicant $M_j \in I_u, i \ne j$.

Motivation:

With the discussion in [BROWN] I have finally found the last piece of the puzzle.

Obviously, the restrictions for syllogistic formulae --- namely removal of duplicate literals and elimination of clauses with contradictory literals --- have been loosened over time. However, this seems to have been a process of ad hoc reasoning.

I am missing an exhaustive formal discussion of the consequences. I.e., the set of logically equivalent DNFs represented by their minimal form of a disjunction of "atomic" literals and the influence on the resulting DNFs of partial assignments, which always cover all possible solutions, albeit in different ways.

In order to find out whether I have to do all of that on my own, I use Theorem 3 to show that, besides the well-known Blake canonical form (BCF), other non-trivial DNFs of implicants with different properties appear when the input clauses are generalized from CNF to allow non-syllogistic representations, namely the DNFs from Theorem 2. Since that also requires a different order of evaluation, which is highly non-intuitive to simplification junkies, I assume that I am out of luck.

It can especially be shown that for selection problems in "direct encoding" (Chapter 2, Handbook of Satisfiability), the input based on Theorem 2 alone allows a CDCL solver to solve the problem with significantly fewer decisions than with plain CNF. Small problems (currently 40 variables, 171 clauses) are even solved earlier with the generally less effective "direct encoding" than with the original CNF encoding. Note that there is actually never a set of implicants generated in that case.

I have prepared a PDF with examples to illustrate the effects.

### StackOverflow

#### How do you use play scala specs2 matchers across multiple files

I am using Play 2.2.1 with scala and trying to test with Specs2 matchers across multiple files. Everything works fine in one very large ApplicationSpec.scala file but I would like to move the code to separate files.

The following code is what I am using to test across multiple files but it is very intermittent.

ApplicationSpec.scala file

import org.specs2.mutable._
import org.specs2.mutable.Specification
import org.specs2.matcher.JsonMatchers
import org.specs2.runner._
import org.junit.runner._

@RunWith(classOf[JUnitRunner])
class ApplicationSpec extends PlaySpecification with JsonMatchers {
  "Test using another file" should {

    testing

    "End of test" in { "End" must beEqualTo("End") }
  }


This function is located inside the ApplicationSpec.scala file

def testing() {

  "Multiple files" should {

    "Testing testFile1" in {

      testFile1.test1
      testFile1.test2

      "Test1 and Test2 should print before this line" in { 1 must beEqualTo(1) }

    }

    "Testing testFile2" in {

      testFile2.test3
      testFile2.test4

      "Test3 and Test4 should print before this line" in { 1 must beEqualTo(1) }

    }
  }
}


testFile1.scala

object testFile1 extends ApplicationSpec {

  def test1 {
    "testFile1 - test1" in { 1 must beEqualTo(1) }
  }

  def test2 {
    "testFile1 - test2" in { 1 must beEqualTo(1) }
  }

}


testFile2.scala

object testFile2 extends ApplicationSpec {

  def test3 {
    "testFile2 - test3" in { 1 must beEqualTo(1) }
  }

  def test4 {
    "testFile2 - tes4" in { 1 must beEqualTo(1) }
  }

}


Test results: each time "play test" is run, test1, test2, test3, and test4 may or may not print out. Sometimes all four tests show up; sometimes none of the tests are printed.

+ test WS logic
[info]
[info]   Test using another file should
[info]
[info]     Multiple files should
[info]
[info]       Testing testFile1
[info]       + Test1 and Test2 should print before this line
[info]
[info]       Testing testFile2
[info]       + testFile2 - test3
[info]       + testFile2 - tes4
[info]       + Test3 and Test4 should print before this line
[info]   + End of test
[info]
[info] Total for specification testFile2
[info] Finished in 1 second, 713 ms
[info] 6 examples, 0 failure, 0 error
[info] testFile1
[info]
[info] Application should
[info] + test WS logic
[info]
[info]   Test using another file should
[info]
[info]     Multiple files should
[info]
[info]       Testing testFile1
[info]       + testFile1 - test1
[info]       + testFile1 - test2
[info]       + Test1 and Test2 should print before this line
[info]
[info]       Testing testFile2
[info]       + Test3 and Test4 should print before this line
[info]   + End of test
[info]
[info] Total for specification testFile1
[info] Finished in 111 ms
[info] 6 examples, 0 failure, 0 error
[info] ApplicationSpec
[info]
[info] Application should
[info] + test WS logic
[info]
[info]   Test using another file should
[info]
[info]     Multiple files should
[info]
[info]       Testing testFile1
[info]       + Test1 and Test2 should print before this line
[info]
[info]       Testing testFile2
[info]       + Test3 and Test4 should print before this line
[info]   + End of test
[info]
[info] Total for specification ApplicationSpec
[info] Finished in 99 ms
[info] 4 examples, 0 failure, 0 error


#### Range class : java interoperability

Why doesn't this work? In Java code:

import scala.collections.immutable.Range;

// ...

Range r = Range.apply(0, 10)


Eclipse says:

The method apply(int) in the type Range is not applicable for the arguments (int, int)

And SBT says:

error: method apply in class Range cannot be applied to given types;

However, there is an apply(Int, Int) method in the collections.immutable.Range object of the Scala API.

#### Scala Future with filter in for comprehension

In the example below I get the exception java.util.NoSuchElementException: Future.filter predicate is not satisfied

I want to have the result Future( Test2 ) when the check if( i == 2 ) fails. How do I handle filter/if within a for comprehension that deals with composing futures?

Below is a simplified example that works in the Scala REPL.

Code:

import scala.concurrent.Future
import scala.util.{ Try, Success, Failure }
import scala.concurrent.ExecutionContext.Implicits.global

val f1 = Future( 1 )
val f2 = for {
  i <- f1
  if i == 2
} yield "Test1"
f2.recover{ case _ => "Test2" }
f2.value


### QuantOverflow

#### How are HFT systems implemented on FPGA nowadays?

Vendors like Cisco claim they have achieved the same results with high performance NIC's (http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-3000-series-switches/white_paper_c11-716030.pdf).

My question is: which parts of HFT systems are mostly implemented on FPGAs nowadays? Are FPGAs still very popular? Is only the feed handler implemented on the FPGA? Some of the systems described above implement only the feed handler on the FPGA, because the strategy changes too often or is too hard to implement in hardware. Others claim to have also implemented trading strategies on FPGAs, or to use high-performance NICs instead of FPGAs to build HFT systems. I've read about different approaches, but I find it hard to compare them, as most of the results are tested on different input sets.

### /r/emacs

#### run shell cmd in vim and emacs using current .bash_profile?

How do I run a shell command in Vim and Emacs using the current .bash_profile?

submitted by millar15

### CompsciOverflow

#### Computation - constructing a PDA that recognizes an intersection of two other languages

I am stuck on this problem from my textbook. This is what I have come up with, but I don't know what the correct solution is.

### StackOverflow

#### "return this" in a covariant trait that returns the actual type

This was probably asked before, but I have this problem:

trait Container[+A] {
  def a: A

  def methodWithSideEffect() = {
    // perform some side effecting work
    this
  }
}

class IntContainer(val a: Int) extends Container[Int]


How do I have the methodWithSideEffect in IntContainer return an IntContainer instead of a Container[Int]? I would also want not to add any parameter to the Container trait, at least from the API user point of view. Note that I did make a workaround with an implicit:

implicit class MyContainer[A <: Container[_]](c: A) {
  def methodWithSideEffect(): A = {
    // perform work
    c
  }
}


However, I am quite sure there is some way to do this more elegantly.

#### Scala: Custom compiler warning

I've created a trait with some abstract procedures (methods returning Unit). I've then created a sub trait that puts in dummy implementations {} for the convenience of development. However I want to put a compiler warning on the dummy trait: "Using the dummy trait all functionality may not be implemented. Use the base trait in production."

I've looked at the Scala annotations and the Java annotations and I can't find an appropriate annotation or way of doing this. I could just make use of a deprecated annotation, but that's rather inelegant:

@deprecated("GraphicMethodsDummy contains procedure stubs. Inherit from GraphicMethods for production", "")


Inheriting from deprecated doesn't seem to have much advantage as the method that produces the compiler message seems to be private and can't be overridden.

### UnixOverflow

#### Find FreeBSD ports that depend on another port

I have a headless FreeBSD server where some port has installed tons of X11-related packages. I would like to find out what these ports are so I can get rid of the unwanted X-related packages. Is there a way to figure this out?

### TheoryOverflow

#### A tool for minimal NFA computation

It is well known that minimizing an NFA for a fixed regular language is $PSPACE$-complete.

As far as I know, there are no better-than-trivial algorithms for minimizing such an NFA, but there's a little improvement if you consider symmetries.

I've a specific regular language I'd like to compute a minimal automaton for:

$$L_{k-distinct} :=\{w = \sigma_1\sigma_2...\sigma_k \mid \forall i\in[k]: \sigma_i\in\Sigma ~\text{ and }~ \forall j\ne i: \sigma_j\ne\sigma_i \}$$

But at the moment I can't seem to close the gap between the automaton I know to build for it and the lower bound I can prove for it.

I thought it might be fruitful to use some tool that, given a language (it is finite for all $k,n$), searches (exhaustively if needed) for the smallest automaton which accepts it, and see what the automaton looks like for small values of $k,n$.

Does anyone know a tool which builds a minimal automaton for a given language?

### Planet Theory

#### Simons Symposium 2014 — Discrete Analysis: Beyond the Boolean Cube

I’m pleased to announce that this week we’ll be reporting on the 2014 Simons Symposium — Discrete Analysis: Beyond the Boolean Cube. This is the second of three biannual symposia on Analysis of Boolean Functions, sponsored by the Simons Foundation. You may remember our reports on the 2012 edition which took place in Caneel Bay, US Virgin Islands. This year we’re lucky to be holding the symposium in Rio Grande, Puerto Rico.

I’m also happy to report that we will have guest blogging by symposium attendee Li-Yang Tan. This year’s talk lineup looks quite diverse, with topics ranging from the Bernoulli Conjecture Theorem to Fourier analysis on the symmetric group, to additive number theory. Stay tuned!

### StackOverflow

#### What is the difference between the reader monad and a partial function in Clojure?

Leonardo Borges has put together a fantastic presentation on Monads in Clojure. In it he describes the reader monad in Clojure using the following code:

;; Reader Monad

(def reader-m
  {:return (fn [a]
             (fn [_] a))
   :bind   (fn [m k]
             (fn [r]
               ((k (m r)) r)))})

(defn asks [f]
  (fn [env]
    (f env)))

(defn connect-to-db []
  (domonad reader-m
    [db-uri (asks :db-uri)]
    (prn (format "Connected to db at %s" db-uri))))

(defn connect-to-api []
  (domonad reader-m
    [api-key (asks :api-key)]
    (prn (format "Connected to api with key %s" api-key))))

(defn run-app []
  (domonad reader-m
    [_ (connect-to-db)
     _ (connect-to-api)]
    (prn "Done.")))

((run-app) {:db-uri "user:passwd@host/dbname" :api-key "AF167"})
;; "Connected to db at user:passwd@host/dbname"
;; "Connected to api with key AF167"
;; "Done."


The benefit of this is that you're reading values from the environment in a purely functional way.

But this approach looks very similar to the partial function in Clojure. Consider the following code:

user=> (def hundred-times (partial * 100))
#'user/hundred-times

user=> (hundred-times 5)
500

user=> (hundred-times 4 5 6)
12000


My question is: What is the difference between the reader monad and a partial function in Clojure?

#### Splitting a list of items into two lists of odd and even indexed items

I would like to make a function that accepts a list and returns two lists: the first contains the items at odd positions and the second the items at even positions (counting from 1).

For example, given [1;2;4;6;7;9], I would like to return [ [1;4;7] ; [2;6;9] ].

I have written this so far and I do not know how to progress.

let splitList list =
    let rec splitOdd oList list1 list2 =
        match oList with
        | [] -> []
    and splitEven oList list1 list2 =
        match oList with
        | [] -> []
    splitOdd list [] []
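Independent of the skeleton above, the target behaviour can be sketched in Python (assuming positions count from 1, as in the example):

```python
def split_list(items):
    """Split items into [elements at odd positions, elements at even positions],
    counting positions from 1, so the first element lands in the odd list."""
    odd, even = [], []
    for i, x in enumerate(items, start=1):
        (odd if i % 2 else even).append(x)
    return [odd, even]

print(split_list([1, 2, 4, 6, 7, 9]))  # [[1, 4, 7], [2, 6, 9]]
```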


### TheoryOverflow

#### Are $PSPACE$-complete problems inherently less tractable than $NP$-complete problems?

Currently, solving either an $NP$-complete problem or a $PSPACE$-complete problem is infeasible in the general case for large inputs. However, both are solvable in exponential time and polynomial space.

Since we are unable to build nondeterministic or 'lucky' computers, does it make any difference to us if a problem is $NP$-complete or $PSPACE$-complete?

### StackOverflow

#### Play Framework & JSON: How to get an item of an array by value

Given the following JSON...

{
  "firstName": "Joe",
  "lastName": "Grey",
  ...
  [
    {
      "name": "Default",
      "street": "...",
      ...,
      "isDefault": true
    },
    {
      "name": "Home",
      "street": "...",
      ...,
      "isDefault": false
    },
    {
      "name": "Office",
      "street": "...",
      ...,
      "isDefault": false
    }
  ]
}


... how do I get, say, the item with name equal to Home?

{
  "name": "Home",
  "street": "...",
  ...,
  "isDefault": false
}


Thanks.
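Setting Play's JSON API aside, the selection logic itself is a one-liner; a Python sketch, where the `addresses` key and the street values are our hypothetical stand-ins for the parts elided in the post:

```python
import json

# "addresses" and the street values are hypothetical; the post elides them.
doc = json.loads("""
{
  "firstName": "Joe",
  "lastName": "Grey",
  "addresses": [
    {"name": "Default", "street": "1 Main St", "isDefault": true},
    {"name": "Home",    "street": "2 Oak Ave", "isDefault": false},
    {"name": "Office",  "street": "3 Elm Rd",  "isDefault": false}
  ]
}
""")

# Take the first array element whose "name" field equals "Home".
home = next(item for item in doc["addresses"] if item["name"] == "Home")
print(home["street"])  # 2 Oak Ave
```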

### QuantOverflow

#### Distribution of Geometric Brownian Motion

Please let me know where I have been mistaken!

Let the SDE satisfied by the GBM $S(t)$ be $$\frac{dS(t)}{S(t)} = \mu dt + \sigma dW(t).$$

Then, the underlying BM $X(t)$ will satisfy $$dX(t) = \left( \mu - \frac{1}{2} \sigma^2 \right)dt + \sigma dW(t).$$

To simulate the GBM for times $t_0 < t_1 < \ldots < t_n$, generate $n$ iid $\mathcal{N}(0,1)$ RVs, $Z_i$, $\quad i = 1,2,\ldots, n$ and set

$$S(t_{i+1}) = S(t_i)\exp\left(\left( \mu - \frac{1}{2} \sigma^2 \right)(t_{i+1} -t_i) + \sigma \sqrt{t_{i+1} - t_i} Z_{i+1} \right).$$ Then $$\frac{S(t)}{S(0)} = \exp\left(\underbrace{\left(\mu - \frac{1}{2} \sigma^2\right) t + \sigma \sqrt{t}Z}_{\mathcal{N}\left(\left(\mu - \frac{\sigma^2}{2}\right)t, \sigma^2 t\right)} \right)$$ and so $$\log \frac{S(t)}{S(0)} \sim \mathcal{N}\left(\left(\mu - \frac{\sigma^2}{2}\right)t, \sigma^2 t\right).$$

I think I'm missing something because when I use this mean and variance (in the equation directly above) for testing the normality of the log returns ($\log \frac{S(t)}{S(0)}$), I get ridiculous answers.

To be specific, with $S_0 = 20$, $\mu = 2$, $\sigma^2 = 1$ and partitioning $[0,1]$ into 100 subintervals, generating the GBM at these 100 points gives a range of values from 15.399 to 97.1384 for the $S(t_i)$. Then the log returns, $\log S(t_i)/S_0$, range from -0.26143 to 1.5804. The means I'm using for each $\log S(t_i)/S_0$ are (in increasing order of $i$) $0.015, 0.03, 0.045, \ldots, 1.5$ and the variances are $0.01,0.02, \ldots, 1$ since these are the parameters in the supposed normal distribution as described above. Finally, when these log returns are normalized using this mean and variance their values range from -2.7643 to 0.080404, which is clearly not $\mathcal{N}(0,1)$-distributed.
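One way to sanity-check the stated distribution empirically is to sample $\log S(1)/S(0)$ across many independent simulated paths for a fixed $t$. A rough Python sketch (ours, not the asker's code) using the post's parameters:

```python
import math
import random

def simulate_gbm_terminal(s0, mu, sigma, times, rng):
    """Simulate S along the given increasing times with the exact recursion
    above; return the terminal value S(times[-1])."""
    s = s0
    for t_prev, t in zip(times, times[1:]):
        dt = t - t_prev
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    return s

# Parameters from the post: S0 = 20, mu = 2, sigma^2 = 1, 100 steps on [0, 1].
rng = random.Random(0)
times = [i / 100 for i in range(101)]
# Across many independent paths, log(S(1)/S(0)) should be
# N(mu - sigma^2/2, sigma^2) = N(1.5, 1) for t = 1.
logs = [math.log(simulate_gbm_terminal(20.0, 2.0, 1.0, times, rng) / 20.0)
        for _ in range(5000)]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
print(round(mean, 1), round(var, 1))  # close to 1.5 and 1.0
```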

### StackOverflow

#### What's the formal functional name for recursive SelectMany?

Let's say I have a function with the following signature:

IEnumerable<TSource> SelectManyRecursive<TSource>(
    this IEnumerable<TSource> enumerable,
    Func<TSource, IEnumerable<TSource>> selector
);


What would be the name for this if it were implemented in a language such as Scala or Haskell (or even JavaScript, since it shares some common naming)?
(Similar to how Select = map, Where = filter, etc.)

If it is a composition of other functions, which ones?

### StackOverflow

#### Command-line refactoring tool for Scala [on hold]

I am looking for a command-line tool to perform simple refactorings on Scala code.

The things I intend to do:

• Renaming classes, objects, traits
• Renaming methods and fields
• Renaming arguments and variables

IDE libraries are OK, if they have a usable CLI interface.

### CompsciOverflow

#### Reduce Vertex cover to SAT

I need to reduce the vertex cover problem to a SAT problem, or rather, tell whether a vertex cover of size k exists for a given graph after solving with a SAT solver. I know how to reduce a 3-SAT problem to the vertex cover problem, by constructing subgraphs for each variable (x, !x) and for each clause (a triangle). But I am not getting how to do it the other way round.

I was thinking of first forming a DNF by selecting k vertices at first, and then converting it to a CNF by enumerating all clauses. Is there any other method?
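One standard encoding (a sketch; not necessarily what the textbook intends) introduces a Boolean variable per vertex, one clause per edge, and forbids covers larger than $k$ with the naive binomial at-most-$k$ encoding, which adds one clause per $(k{+}1)$-subset of vertices:

```python
from itertools import combinations, product

def vertex_cover_to_cnf(n, edges, k):
    """CNF (list of clauses; literal v means 'vertex v is in the cover', -v its
    negation) satisfiable iff the graph has a vertex cover of size <= k.
    Vertices are numbered 1..n. The at-most-k part is the naive binomial
    encoding and blows up quickly; real encodings use counters instead."""
    cnf = [[u, v] for (u, v) in edges]          # every edge must be covered
    for subset in combinations(range(1, n + 1), k + 1):
        cnf.append([-v for v in subset])        # among any k+1 vertices, one is out
    return cnf

def is_satisfiable(cnf, n):
    """Tiny brute-force SAT check, just to sanity-test the encoding."""
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in cnf):
            return True
    return False

triangle = [(1, 2), (2, 3), (1, 3)]
print(is_satisfiable(vertex_cover_to_cnf(3, triangle, 1), 3))  # False: K3 needs 2
print(is_satisfiable(vertex_cover_to_cnf(3, triangle, 2), 3))  # True
```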

### TheoryOverflow

#### Intuition: Odd-cycle transversal in triangle-free graphs

Conjecture: If $G$ is a simple triangle-free graph, then there is a set of at most $n^2/25$ edges whose deletion destroys every odd cycle.

Bibliography

[EFPS] P. Erdős, R. Faudree, J. Pach and J. Spencer, How to make a graph bipartite. J. Combin. Theory Ser. B 45 (1988), 86--98.

Question 1: Is this conjecture true by your intuition?

Question 2: What is the complexity of counting the number of odd cycles? Is there any efficient algorithm to do that?

#### Convex 'Fair' Partitions Of Convex Polygons

It can be shown that any polygon (not necessarily convex) allows a fair partitioning into $n$ pieces for any $n$, provided the pieces need not be convex (this is not a convex fair partition).

Which can be found below:

This blog maintained by the authors has tentative thoughts, examples, etc on 'Fair Partitions': http://nandacumar.blogspot.com

Question: Given any positive integer $n$, can any convex polygon be partitioned into $n$ convex pieces so that all pieces have the same area and same perimeter?

### StackOverflow

#### Scala play controller function type

I am writing my own tool for testing Play Framework controllers in DSL style, extending PlaySpecification. I need to pass a controller method to a method of my framework class, but I am a bit rusty about types. I found in the scaladoc that Action has (Request[A] => Result).

So I did:

def controllerHasStatusCode(ctlrFunc: Request[A] => Result, expectedHttpCode: Int) = {
  val result = ctlrFunc(0)(FakeRequest())
  status(result) must equalTo(expectedHttpCode)
}


But I get a compilation error, not found: type A. Can anyone experienced in Play Framework help me pass a controller's method as an argument to my function?

Finally, a client would do something like controller signin mustHaveHttpCode OK, but the question is not about that.

Note: probably a similar framework already exists, but I would enjoy writing my own, step by step.

#### are multiple features a good use case for logback markers?

My use case is as follows (pseudo code):

def addUser(user) {
  MDC.put(user.id)
  calcUser(user)
  MDC.remove(user.id)
}

def calcUser(user) {
}

def calcUseName(user) {
  storeUserInCache(user)
}

storeUserInCache(user) {
  // is this a good use case? in case I want to enable the CACHE feature at TRACE
  // level in the logs (for the sake of example, or any other feature whose tracing
  // I want to enable), I mark different TRACE calls with different markers.
  LOG.trace(MyMarkers.CACHE, "storing user {} in cache", user);
}

getUserFromCache(userid) {
  LOG.trace(MyMarkers.CACHE, "getting user {} from cache", userid)
}


Now, what I meant by the above is the ability to toggle TRACE on for a user id via its MDC, while also toggling logs on or off for different features. For example, by using the CACHE marker I can have my application log the whole CACHE feature at TRACE level, just because I want to see all cache activity in the trace. Is the CACHE marker a good use case for markers, as a toggle to see the whole CACHE feature at TRACE level in my logs?

#### Using parametrized role in ansible - role configuration files

I have made an Ansible playbook which rolls out a complete JBoss EAP 6 application (software, configuration of datasources, databases, the works). All runs fine, but I have constructions like

- { role: instance,
    instance_type: standalone-ha,
    port_offset: 0,
    instance_name: test,
    scanner_name: postnl,
    lbl_group: testcluster,
    java_memory_min: 256m,
    java_memory_max: 256m,
    java_memory_perm: 256m }


in my playbook. This works fine, but in this question I am looking for a cleaner way to define the variables (of course there is the group_vars directory, in which you can put these variables, but this isn't going to work when you need two or more instances).

So is there a way to clean this up, like so:

  - {role: instance, role_vars: /opt/test.cfg }


or something like this.

### CompsciOverflow

#### Axiom of choice and diagonalization [migrated]

I just found out that the axiom of choice is equivalent to the statement that every vector space has a basis, such that any vector in the space can be written as a linear combination of a finite number of basis vectors.

If we assume the axiom of choice is true, then this seems to break proofs which use diagonalization.

For example, let $K_{2}$ be the binary field, and $B$ be the (countably infinite) basis over $\{0,1\}^{*}$ given by the axiom of choice.

Assume the following table enumerates the set of all sets of integers. The first column is $s_n$, the $n$th set in the enumeration, and the second column is a binary string describing $s_n$, such that $n \in s_n$ iff the $n$th bit of the associated string is a 1.
-------------
$s_0$ | 000...
$s_1$ | 100...
$s_2$ | 010...
$s_3$ | 110...
$s_4$ | 001...
...

We can iteratively use the anti-diagonal method to generate an uncountable number of strings not in our list, contradicting our assumption.

However, if the strings do not describe their associated sets directly, but rather describe them in terms of finite linear combinations of $B$, then diagonalization becomes meaningless, as it always produces infinite strings, and we have assumed that all vectors in $K_2^*$ are describable using a finite number of linear combinations.

In other words, $s_n = a_1b_1 + a_2b_2 + \ldots + a_kb_k \text{ for } a_1,\ldots, a_k \in K_2 \text{ and } b_1,\ldots,b_k \in B$ and $a_m = 1$ iff the $m$th symbol in the string associated with $s_n$ is $1$.

So, that would seem to imply that we can enumerate the real numbers, and Turing machines can decide all sorts of silly stuff, etc. Am I misunderstanding, or is this really how AC works?

After thinking about this for a while, I have come to the conclusion that if this is how AC works, then it is kind of like assuming that you have access to a countably infinite amount of information generated by a countably infinite amount of computation. Is that a good way of looking at it?

So what exactly are my questions?

• Is my understanding of AC correct?
• Is the above proof correct?

P.S. Sorry if my notation is goofy, or I left out any steps. Please let me know in the comments, so that my proof skills can improve :)

### StackOverflow

#### Update hierarchical / tree structure in Clojure

I have an Atom, like x:

(def x (atom {:name "A"
              :id 1
              :children [{:name "B"
                          :id 2
                          :children []}
                         {:name "C"
                          :id 3
                          :children [{:name "D" :id 4 :children []}]}]}))


and need to update a submap like for example:

if :id is 2 , change :name to "Z"


resulting in an updated Atom:

{:name "A"
 :id 1
 :children [{:name "Z"
             :id 2
             :children []}
            {:name "C"
             :id 3
             :children [{:name "D" :id 4 :children []}]}]}


how can this be done?
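The transformation itself can be sketched recursively; a Python version over plain nested dicts (our own sketch; the Clojure-specific part, updating the atom with a persistent structure, is the actual question):

```python
def update_node(node, target_id, key, value):
    """Return a copy of the tree with `key` set to `value` on the node whose
    id equals target_id; children are traversed recursively, and the
    original tree is left untouched (an analogue of a persistent update)."""
    node = dict(node)  # shallow copy of this node
    if node["id"] == target_id:
        node[key] = value
    node["children"] = [update_node(c, target_id, key, value)
                        for c in node["children"]]
    return node

tree = {"name": "A", "id": 1, "children": [
    {"name": "B", "id": 2, "children": []},
    {"name": "C", "id": 3, "children": [{"name": "D", "id": 4, "children": []}]}]}

updated = update_node(tree, 2, "name", "Z")
print(updated["children"][0]["name"])  # Z
```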

### StackOverflow

#### Why does into separate keys from values when I add a map to a vector?

Suppose I have a vector v:

(def v [1 2 3])


When I add a vector to it using into it adds the items in order:

(into v [4 5])
;= [1 2 3 4 5]


If I add a map however it adds the value before the key:

(into v #{:some-key 2})
;= [1 2 3 2 :some-key]


If I add a map with 2 entries it gets worse:

(into v #{:some-key 2 :some-other-key 4})
;= [1 2 3 2 4 :some-other-key :some-key]


This seems a bit counter-intuitive for me. Is there a reason for this kind of behavior?

### CompsciOverflow

#### Regarding Turing Machine Halting Problem [on hold]

All problems solved by today's standard general-purpose computers can be solved by a standard Turing machine. As a general-purpose computer can't do more than a Turing machine, the Turing machine halting problem must also be unsolvable by today's general-purpose computers. How can I understand the fact that the halting problem can't be solved by today's general-purpose computers?

### StackOverflow

#### What's the difference between (concat [x] y) and (cons x y)?

I'm a complete noob trying to pick up Clojure. I am working through the examples at 4clojure and am stuck at the Pascal's Trapezoid problem, where you need to build a lazy sequence of the trapezoid's numbers.

(defn pascal [x]
  (cons x
        (lazy-seq
          (pascal
            (map +
                 (cons 0 x)
                 (conj x 0))))))


Which didn't work:

user=> (take 5 (pascal [1 1]))
([1 1] (1 2 1) (0 2 4 2) (0 0 4 8 4) (0 0 0 8 16 8))


Writing it this way works, however:

(defn pascal2 [x]
  (cons x
        (lazy-seq
          (pascal2
            (map +
                 (concat [0] x)
                 (concat x [0]))))))

user=> (take 5 (pascal2 [1 1]))
([1 1] (1 2 1) (1 3 3 1) (1 4 6 4 1) (1 5 10 10 5 1))


So, what exactly am I doing wrong here? What is the difference between cons/conj and concat?
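For reference, the arithmetic both versions are aiming at, pairwise sums of the row padded on each side with a zero, is the same; a Python sketch of just that step (the cons/conj/concat semantics are the separate Clojure question):

```python
def next_pascal_row(row):
    """Pad the row with a zero on each side and add elementwise:
    [1, 2, 1] -> zip([0, 1, 2, 1], [1, 2, 1, 0]) -> [1, 3, 3, 1]."""
    return [a + b for a, b in zip([0] + row, row + [0])]

row = [1, 1]
rows = []
for _ in range(5):
    rows.append(row)
    row = next_pascal_row(row)
print(rows)  # [[1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1], [1, 5, 10, 10, 5, 1]]
```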

### QuantOverflow

#### Monte Carlo American Option Pricing under GARCH(1,1) volatility

I am attempting to price a couple of at-the-money American options using the LSM algorithm and GARCH(1,1) volatility. The LSM code I have works correctly for constant volatility; however, when I switch to the GARCH(1,1) model the option price is incorrect.

The option I am attempting to price is the example given in Ritchken and Trevor (1999), which is also reproduced in "Pricing American options when the underlying asset follows GARCH processes" by Lars Stentoft.

The variables given in both papers are:

The interest rate (r) is fixed at 10% (annualized using 365 days a year).

Stock price (S0) = 100

Strike price (K) = 100

Contract Time ($T$) = (2,10,50 and 100 days)

Option can be exercised daily ($dt$) = $\frac{1}{365}$

$\omega$ = 0.06575 (as we are working with returns in percentage terms). Note: $\omega$ = 6.575 x $10^{-6}$ in Ritchken and Trevor.

$\alpha$ = 0.04

$\beta$ = 0.90

c = 0

$\lambda$ = 0.

I am using Monte Carlo simulation to calculate the option price using the following (from Duan 1995):

$\sigma^2_{0} = \frac{\omega}{1 - \alpha - \beta}$

$\sigma_{t}^2 = \omega + \alpha(\varepsilon_{t-1})^2 + \beta \sigma^2_{t-1}$

Where: $\varepsilon_{t-1}$ ~ $N(0,\sigma_{t-1})$

and the evolution of the stock price is given by LRNVR condition:

$S_{t+1} = S_{t}\exp\left[\left(r - \frac{1}{2}\sigma_{t}^2\right) dt + \varepsilon_{t}\sigma_{t}\sqrt{dt}\right]$

The answer for the LSM algorithm should be: (0.5589,1.1930,2.3984 and 3.1443) for (T = 2,10,50 and 100 days, respectively).

My answers, however, do not match those above. Is there anything incorrect in the stock price or variance evolution?
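For reference, the simulation described by the two recursions above can be sketched as follows (illustrative Python only; the function name and structure are assumptions, with $\omega$ on the same decimal scale as the returns). Note it follows the Duan (1995) convention in which the innovation $\varepsilon_t = \sigma_t z_t$ enters the price step directly: multiplying by $\sigma_t$ again, as the question's last formula reads, would apply the volatility twice.

```python
import math
import random

def simulate_garch_terminal(s0, r, omega, alpha, beta, n_days, n_paths, seed=0):
    # Monte Carlo terminal prices under the risk-neutral GARCH(1,1)
    # recursion sketched above (all names here are illustrative).
    rng = random.Random(seed)
    dt = 1.0 / 365.0
    var0 = omega / (1.0 - alpha - beta)  # stationary starting variance
    terminal = []
    for _ in range(n_paths):
        s, var = float(s0), var0
        for _ in range(n_days):
            # Duan-style innovation: eps_t = sigma_t * z_t, z_t ~ N(0, 1).
            eps = math.sqrt(var) * rng.gauss(0.0, 1.0)
            # Price step uses eps_t directly (no extra sigma_t factor).
            s *= math.exp((r - 0.5 * var) * dt + eps * math.sqrt(dt))
            # GARCH(1,1) variance update.
            var = omega + alpha * eps * eps + beta * var
        terminal.append(s)
    return terminal
```

The LSM regression step itself is not shown; this only covers the path generation that the question's formulas describe.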

Many thanks,

Hob

### /r/compsci

#### What would you want from a college Computer Science Club?

I'm currently running a computer science club at my school. I want to know what experiences you all had, or wanted to have, from a college computer science club, so I can get the right idea of what people want from one.

Thank you!

submitted by jjrocks

### StackOverflow

#### Is there a good way in Scala to interpret the types of values in a CSV

Suppose I'm given a CSV with the following values:

0, 1.00, Hello
3, 2.13, World
.
.
.


Is there a good method or library that could automatically detect the best type to classify a given column as? In this case (Int, Float, String).

For more context, I'm attempting to extend a CSV parsing library to let it report histogram-like data on the CSV that is passed in. The idea is to make it very easy to add certain validation tasks into this framework, so as to figure out deficiencies or irregularities in a CSV data dump.

Initially I thought to write something to which a user could supply a config file specifying the types, but for cases when the CSV column sets are very large, or just for ease of use, I'd like to attempt to detect the types automatically instead of having the user write them out.
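For illustration, the kind of inference being asked about can be sketched as a try-the-narrowest-type-first pass over each column (Python here for brevity; all names are made up, and a Scala version would follow the same shape):

```python
def infer_type(values):
    """Pick the narrowest type (Int -> Float -> String) that fits every value."""
    for cast, name in ((int, "Int"), (float, "Float")):
        try:
            for v in values:
                cast(v)  # raises ValueError on the first value that doesn't fit
            return name
        except ValueError:
            continue
    return "String"  # everything parses as a string

def infer_column_types(rows):
    """rows: list of lists, one inner list per CSV record."""
    return [infer_type(col) for col in zip(*rows)]
```

On the sample above this yields `["Int", "Float", "String"]`; note `int("1.00")` fails, which is what pushes the second column down to Float.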

### /r/clojure

#### How do I convert an OOP solution to a Clojure one?

To learn Clojure, I started "Coding for Interviews". So say I have this problem: http://codingforinterviews.com/archive/preview/20

The problem statement is: implement a queue using two stacks.

If I were doing JavaScript, I would have a Queue object with two private members (the two stacks). I would then have two public methods "enqueue" and "dequeue" which use the private stacks' pop and push methods.

In Functional Programming, though, I'm not so sure. My initial thought is this: have a "queue" module that holds two lists at the top-level scope and limit my use of these lists to (peek list) and (pop list). I would export "my-enqueue" and "my-dequeue", which are implemented using pop and push on the lists. Then, the user imports the "queue" lib and uses "my-enqueue" and "my-dequeue" on their own lists...

Is this what a typical Clojure solution to the problem would look like? So the encapsulation provided by classes in the OOP solution translates to modules in FP? I find it strange because there's no real "queue" or "stack" here (from the user's perspective); I just happen to be artificially restricting the use of operations on lists... Or would you implement it another way?
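For what it's worth, the purely functional flavor of the two-stack queue can be sketched like this (Python, with tuples standing in for immutable stacks; all names are hypothetical). The key difference from the OOP version is that each operation returns a new queue value instead of mutating hidden state:

```python
# A purely functional queue from two stacks: 'front' is popped from,
# 'back' is pushed onto; when front empties, back is reversed into front.
# Amortized O(1): each element is reversed at most once.

EMPTY = ((), ())  # (front, back)

def enqueue(queue, x):
    front, back = queue
    return (front, (x,) + back)  # push onto the back stack

def dequeue(queue):
    front, back = queue
    if not front:  # shift the back stack over, reversing it
        front, back = tuple(reversed(back)), ()
    if not front:
        raise IndexError("dequeue from empty queue")
    return front[0], (front[1:], back)  # (value, new queue)
```

The user never touches the two stacks directly; they only see queue values and the two functions, which is roughly the module-level encapsulation described above.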

Thanks :)

submitted by pilgrim689

### StackOverflow

#### scala type tags and type aliases

If I have a type alias definition in a class, can I compare it during run time with a statically known type or other type alias? Consider:

type ConsArguments = (Option[Long], String, Option[String], Iterable[Input])

trait Type {
  val name :String
  type Value
  def apply(id :Option[Long], name :String, label :Option[String], inputs :Iterable[Input]=Iterable()) :Statistic
}

class BaseType[V :TypeTag](val name :String, constructor :((ConsArguments)) => Statistic {type Value=V}) extends Type {
  type Value = V
  def apply(id :Option[Long], name :String, label :Option[String], inputs :Iterable[Input]=Iterable()) :Statistic{type Value=V} =
    constructor((id, name, label, SortedSet[Input]()(Input.nameOrdering)++inputs))
}

val LongValued = new BaseType[Long]("long", (LongStatistic.apply _).tupled)
val lv :Type = LongValued

println("type of LongValued: "+universe.typeOf[LongValued.Value]+" is Long? "+(universe.typeOf[LongValued.Value]=:=universe.typeOf[Long]))
println("type of lv: "+universe.typeOf[lv.Value]+" is Long? "+(universe.typeOf[lv.Value]=:=universe.typeOf[Long]))


The first comparison is true, the second false. Can I somehow fix it? Generally, I'll have more instances of 'Type' serving as constructors for classes from my domain model and would like to iterate over a collection of those and choose a matching one.

### StackOverflow

#### Running unit tests in Scala project results in "error: package scala does not exist"

This error occurs whenever an import from scala is attempted. The Scala library is in the classpath, so I don't know what else could cause the problem. I am running Eclipse Kepler with the latest stable Scala library.

### CompsciOverflow

#### Euclidean Traveling Salesman

I am trying to find a way to solve Euclidean TSP in polynomial time. I looked at some papers but couldn't decide which one is best. What is the general approximation algorithm for solving this problem in polynomial time?
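For context, a classic polynomial-time 2-approximation for metric (hence Euclidean) TSP is the MST double-tree heuristic: build a minimum spanning tree, walk it in preorder, and shortcut repeated vertices. Christofides improves the guarantee to 3/2, and Arora and Mitchell give a PTAS specifically for the Euclidean case. A minimal sketch of the MST walk (illustrative Python, hypothetical names):

```python
import math

def mst_double_tree_tour(points):
    """2-approximate metric TSP tour: preorder walk of a minimum spanning
    tree (built with Prim's algorithm), with repeated vertices shortcut."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm over the complete graph on the points.
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    # Preorder (depth-first) walk of the tree = the shortcut Euler tour.
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

The tour has length at most twice the MST weight, and the MST weighs less than any tour, which gives the factor-2 guarantee; shortcutting never increases length because the metric satisfies the triangle inequality.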

Thank you

#### Matlab scripts (converting strings to amount of lower case) [on hold]

I'm new to this site as well as to coding, so the question might seem trivial, but any help would be great (please don't just answer the problem if possible).

I'm trying to write a function where a string of letters is converted to the number of lower-case characters (loops and conditionals are not allowed). My attempt so far is:

function countLowerCase
    string = input('Please enter a string: ');
    Lowercase = 'string' > 96 & 'string' < 123;
    sum(Lowercase)


My script does not work though.

#### Fibonacci pseudocode user input? [on hold]

Hello everyone, I am trying to do a Fibonacci loop for a user input on the SPARC machine.

for example if I plug in 13 I get 1 1 2 3 5 8 as a sequence

so I did this

a = 0, b = 1, sum = 0

while sum < userinput:
    sum = a + b
    a = b
    b = sum
    printout(sum)


This is my pseudocode,

but it does not seem to work: if I plug in 69, for example, I get 1 2 3 5 8 13 21 34 55 89,

an extra number (89) that I do not need.

Can anyone show me how to do this correctly using a loop, please?
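A sketch of the corrected loop (illustrative Python): the fix is to test a value before emitting it, whereas the pseudocode above checks the old sum, then computes and prints a new one that may already exceed the limit, which is where the extra number comes from.

```python
def fibonacci_below(limit):
    """Fibonacci numbers strictly below limit, e.g. 13 -> 1 1 2 3 5 8."""
    a, b = 1, 1
    out = []
    # The loop condition tests the value that is about to be printed,
    # so nothing >= limit ever gets emitted.
    while a < limit:
        out.append(a)
        a, b = b, a + b
    return out
```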

### Planet Clojure

#### Optimization with Loco

I had the pleasure of beta-testing Loco, which was announced today. Loco is a Clojure constraint solver library built on top of the Java library Choco. There are several constraint libraries available for Clojure, including core.logic, propaganda, and clocop, each with a slightly different focus.

The features that make Loco shine are the performance of the constraint engine, the fully declarative API, the ease with which one can build models, support for several interesting global constraints, and the ability to find an optimal solution for models with multiple solutions.

This is the first article of what I hope to be a series, detailing some of the interesting problems you can solve with Loco.

# Scheduling Buses with Loco

(use 'loco.core 'loco.constraints)

Loco is a powerful and expressive constraint solver, but it can also be used to solve certain kinds of integer linear programs.

One classic example is bus scheduling. Imagine that we are city transportation planners and we want to minimize the number of buses we need to operate in order to meet demand. We know the number of buses demanded for four-hour blocks of time.

(def demands
  {:12am-4am 8, :4am-8am 10, :8am-12pm 7,
   :12pm-4pm 12, :4pm-8pm 4, :8pm-12am 4})

So for example, this map tells us that there is sufficient demand for 8 buses operating between 12am and 4am.

The interesting twist is that buses operate for 8 hours at a time. So, if we set a bus into operation at 12am, it operates an 8-hour shift from 12am-8am. So the question is, how many buses do we need to run from 12am-8am, and from 4am-12pm, etc. in order to meet the above demands.

We can represent this by a series of variables, each of which must be an integer from 0 through 12 (since 12 is the maximum overall demand).

So with loco, we can get the solution quite simply:

(solution
  [($in :bus-12am-8am 0 12) ($in :bus-4am-12pm 0 12)
   ($in :bus-8am-4pm 0 12) ($in :bus-12pm-8pm 0 12)
   ($in :bus-4pm-12am 0 12) ($in :bus-8pm-4am 0 12)
   ($>= ($+ :bus-8pm-4am :bus-12am-8am) (demands :12am-4am))
   ($>= ($+ :bus-12am-8am :bus-4am-12pm) (demands :4am-8am))
   ($>= ($+ :bus-4am-12pm :bus-8am-4pm) (demands :8am-12pm))
   ($>= ($+ :bus-8am-4pm :bus-12pm-8pm) (demands :12pm-4pm))
   ($>= ($+ :bus-12pm-8pm :bus-4pm-12am) (demands :4pm-8pm))
   ($>= ($+ :bus-4pm-12am :bus-8pm-4am) (demands :8pm-12am))]
  :minimize
  ($+ :bus-12am-8am :bus-4am-12pm :bus-8am-4pm :bus-12pm-8pm :bus-4pm-12am :bus-8pm-4am))

which yields

{:bus-8pm-4am 0, :bus-4pm-12am 4, :bus-12pm-8pm 7, :bus-8am-4pm 5, :bus-4am-12pm 2, :bus-12am-8am 8}

Let’s see if we can generalize this to handle an arbitrary number of evenly-spaced time periods. Clearly, to do this we’ll need to get away from demands and variables that directly name the timespan. Instead, for our example in which we sliced the day into six 4-hour time periods, we can imagine indexing these blocks of time (0-based) as “Time period 0” through “Time period 5”. So we can just use a vector for our demands. For example, [8 10 7 12 4 4] means that 8 buses are demanded for time period “0” (corresponding to 12am-4am), 10 buses are demanded for time period “1” (corresponding to 4am-8am), … up to a demand of 4 buses for time period “5”.

We’ll make use of Loco’s ability to treat vectors as subscripted variables. So [:buses 0] denotes $buses_0$, which is our variable for how many buses we schedule starting at the beginning of time period 0 (i.e., 12am). [:buses 1] (i.e., $buses_1$) is the variable for the number of buses starting at the beginning of time period 1, etc. We will also need an input, span, which indicates how many consecutive time periods are spanned by a bus’s shift. In our example, span would be 2 (because a bus works for 2 of our 4-hour time periods).

(defn minimize-buses
  "Takes a vector of the demands for any number of equally-spaced time
  slots. span is the number of time slots that a bus's operating time spans."
  [demands span]
  (let [time-slots (count demands),
        max-demand (apply max demands),
        declarations (for [i (range time-slots)]
                       ($in [:buses i] 0 max-demand))
        constraints (for [i (range time-slots)]
                      ($>= (apply $+ (for [j (range (inc (- i span)) (inc i))]
                                       [:buses (mod j time-slots)]))
                           (demands i)))]
    (solution
      (concat declarations constraints)
      :minimize (apply $+ (for [i (range time-slots)] [:buses i])))))

Let’s test it out on our original sample demand:

=> (minimize-buses [8 10 7 12 4 4] 2)
{[:buses 5] 0, [:buses 4] 4, [:buses 3] 7, [:buses 2] 5, [:buses 1] 2, [:buses 0] 8}

Hmmm, it’s a little hard to read. We can fix that:

=> (into (sorted-map) (minimize-buses [8 10 7 12 4 4] 2))
{[:buses 0] 8, [:buses 1] 2, [:buses 2] 5, [:buses 3] 7, [:buses 4] 4, [:buses 5] 0}

Good, same answer as before. But now we can easily adjust to alternative demand schedules. For example, here’s a solution for a demand schedule based on 2-hour time periods, while buses still work 8-hour shifts:

=> (into (sorted-map) (minimize-buses [1 5 7 9 11 12 18 17 15 13 4 2] 4))
{[:buses 0] 0, [:buses 1] 1, [:buses 2] 2, [:buses 3] 6, [:buses 4] 2, [:buses 5] 2, [:buses 6] 8, [:buses 7] 5, [:buses 8] 0, [:buses 9] 0, [:buses 10] 0, [:buses 11] 4}

Now, let’s try a demand schedule with 1-hour time periods, with buses working 8-hour shifts:

=> (into (sorted-map) (minimize-buses [1 3 5 7 9 11 12 13 14 15 16 19 18 17 15 13 15 16 10 8 6 5 4 2] 8))

Uh oh, this seems to run forever. We can fix this with the timeout feature. In the definition of minimize-buses, we change the call to solution as follows:

(solution
  (concat declarations constraints)
  :minimize (apply $+ (for [i (range time-slots)] [:buses i]))
  :timeout 1000)

The :timeout keyword specifies a number of milliseconds, after which the solver should return the best solution it has found so far:

=> (into (sorted-map)
     (minimize-buses
       [1 3 5 7 9 11 12 13 14 15 16 19
        18 17 15 13 15 16 10 8 6 5 4 2]
       8))
{[:buses 0] 1, [:buses 1] 2, [:buses 2] 2, [:buses 3] 2, [:buses 4] 2, [:buses 5] 2, [:buses 6] 1, [:buses 7] 1, [:buses 8] 2, [:buses 9] 3, [:buses 10] 3, [:buses 11] 5, [:buses 12] 1, [:buses 13] 1, [:buses 14] 2, [:buses 15] 2, [:buses 16] 2, [:buses 17] 0, [:buses 18] 0, [:buses 19] 0, [:buses 20] 0, [:buses 21] 0, [:buses 22] 0, [:buses 23] 0}

Written with StackEdit.

### StackOverflow

#### Deep copy in Scala using Macros vs. Reflection

I've recently implemented deep copy and deep equals using Reflection. Now I am thinking about implementing these functionalities using Macros.

Is that possible? (If I have access to the source code of the classes on which I want to use these functionalities.)

My feeling is that using Macros would be better for two reasons:

1) more compile-time type safety

2) faster execution at runtime

### QuantOverflow

#### What does "percent of change" mean? [on hold]

Whenever a price is changed, you can find the percent of increase or the percent of decrease by using the following formula:

$$\frac{\text{percent of change}}{100}=\frac{\text{change in price}}{\text{original price}}$$

To find the change in price, you calculate the difference between the original price and the new price.

Is the "percent of change" the change in price represented as a percent of the original price?

Does the proportion:

"percent of change" is to $100$ as change in price is to original price

make sense? Also, don't we lose the percent symbol if the original price is \$100? Since then

$$\text{percent of change}=\text{change in price}$$

So does "percent of change" now just become a portion of the original price?

### Planet Clojure

#### Appointment scheduling in Clojure with Loco

Loco makes it easy to declaratively build constraint satisfaction models. In this blog post, we’ll look at a common use for constraint programming – appointment scheduling – and in so doing, we’ll see some of the ways that Loco goes beyond the features found in other Clojure constraint libraries.

(use 'loco.core 'loco.constraints)

# Scheduling appointments with no conflicts

Imagine you have four people coming in for an interview, and you’ve set aside four timeslots in your day to conduct these interviews. You ask each person to list the timeslots when he/she can potentially come in. Let’s use 0-based indexing to refer to the people, and 1-based indexing to refer to the timeslots. Person 0 says she can come in at any of the four timeslots: 1, 2, 3, or 4. Person 1 says he can come in at timeslot 2 or 3. Person 2 says she can come in at timeslot 1 or 4. Person 3 says he can come in at timeslot 1 or 4. So the availability data looks like this:

(def availability
  [[1 2 3 4]
   [2 3]
   [1 4]
   [1 4]])

Let the variable [:person 0] denote the timeslot when person 0 is scheduled to come in, [:person 1] when person 1 comes in, etc.

(def person-vars
  (for [i (range (count availability))] [:person i]))

We want to constrain each [:person i] variable to the available timeslots.

(def availability-constraints
  (for [i (range (count availability))]
    ($in [:person i] (availability i))))

We want to ensure we don’t schedule two people in the same timeslot.

(def all-different-constraint
  (apply $all-different? person-vars))

For convenience, let’s assemble the constraints into one big list (the order doesn’t matter in Loco):

(def all-constraints
  (conj availability-constraints all-different-constraint))

Now we’re ready to solve. Let’s dump the solution into a sorted-map for easy readability.

=> (into (sorted-map) (solution all-constraints))
{[:person 0] 3, [:person 1] 2, [:person 2] 4, [:person 3] 1}

So there you have it. Once we’ve played around with this example interactively in the REPL, and are confident in the model, we can easily abstract this into a function that takes availability data and returns the schedule:

(defn schedule [availability]
  (->> (solution
         (conj
           (for [i (range (count availability))]
             ($in [:person i] (availability i)))
           ($distinct (for [i (range (count availability))] [:person i]))))
       (into (sorted-map))))

=> (schedule [[1 3 5] [2 4 5] [1 3 4] [2 3 4] [3 4 5]])
{[:person 0] 5, [:person 1] 4, [:person 2] 1, [:person 3] 2, [:person 4] 3}

I think the declarative Loco way of modeling constraints is concise and elegant, but this example could just as easily be done in, say, core.logic. So let’s push beyond, into an area that (as far as I know) can’t be done with core.logic.

# Scheduling appointments minimizing conflicts

The above scheduler is somewhat naive.

=> (schedule [[1 2 3 4] [1 4] [1 4] [1 4]])
{}

This doesn’t work because there’s no way to satisfy the constraint that no two people can be scheduled in the same timeslot. But let’s say, hypothetically, that if absolutely necessary, we can potentially squeeze two candidates into the same timeslot. We’d rather not, but we can do it if we have to. Can we build a model for this? Again, let’s start exploring the problem interactively with global defs, playing around with it at the REPL.

Here’s the problematic availability example:

(def availability [[1 2 3 4] [1 4] [1 4] [1 4]])

As before, we’ll want to constrain each person’s timeslot to his/her availability schedule:

(def availability-constraints
  (for [i (range (count availability))]
    ($in [:person i] (availability i))))

Let’s define a few names for convenience. Let timeslots be a list of all the timeslot numbers.

(def timeslots (distinct (apply concat availability)))

Let person-vars be the list of all [:person i] variables.

(def person-vars  (for [i (range (count availability))] [:person i]))

Now for the interesting part. We want to allow up to 2 people in a given timeslot. So we’ll let the variable [:num-people-in-timeslot 1] be the number of people signed up for timeslot 1, and so on. Let people-in-timeslot-vars be the list of all such variables.

(def people-in-timeslot-vars  (for [i timeslots] [:num-people-in-timeslot i]))

Now, we create a list of constraints that state that each of these [:num-people-in-timeslot i] variables ranges between 0 and 2.

(def conflict-constraints
  (for [i timeslots]
    ($in [:num-people-in-timeslot i] 0 2)))

To give these :num-people-in-timeslot variables the appropriate meaning, we need to bind each [:num-people-in-timeslot i] variable to the number of times i occurs among the variables [:person 1], [:person 2], etc. Loco’s $cardinality constraint allows us to do exactly that. For example,

($cardinality [:x :y :z] {1 :number-of-ones})

will bind :number-of-ones to the number of times 1 occurs among :x, :y, and :z. So, the following constraint will bind all the [:num-people-in-timeslot i] variables to their appropriate values.

(def number-in-timeslots
  ($cardinality person-vars
                (zipmap timeslots people-in-timeslot-vars)))

To minimize the number of conflicts, we need to count the number of conflicts.

Let the variable :number-of-conflicts stand for the number of timeslot conflicts we have. We need two constraints on :number-of-conflicts. The first constraint just sets up the finite domain that the variable could range over (i.e., 0 to the total number of timeslots). We need to do this because in Loco, every variable must be declared somewhere in the model. The second constraint binds :number-of-conflicts to the number of times 2 appears in the variables [:num-people-in-timeslot 1], [:num-people-in-timeslot 2], etc.

(def number-of-conflicts
  [($in :number-of-conflicts 0 (count timeslots))
   ($cardinality people-in-timeslot-vars {2 :number-of-conflicts})])

We built the constraints in parts; now building the model is simply a matter of concatting all the constraints together. (Note that number-in-timeslots is a single constraint, so we concatenate [number-in-timeslots] in with the other lists of constraints).

(def all-constraints
  (concat availability-constraints
          conflict-constraints
          [number-in-timeslots]
          number-of-conflicts))

Now, we’re all set up to solve the model.

=> (solution all-constraints :minimize :number-of-conflicts)
{[:person 0] 2, [:person 1] 4, [:person 2] 4, [:person 3] 1, :number-of-conflicts 1, [:num-people-in-timeslot 1] 1, [:num-people-in-timeslot 2] 1, [:num-people-in-timeslot 3] 0, [:num-people-in-timeslot 4] 2}

In the final version, we really only want to see the [:person i] variables; Loco allows us to hide the other variables from the output by prepending an underscore character in front of the variable names.

So let’s abstract this into a more robust schedule-with-conflicts function.

(defn schedule-with-conflicts [availability]
  (let [timeslots (distinct (apply concat availability)),
        availability-constraints
        (for [i (range (count availability))]
          ($in [:person i] (availability i))),
        person-vars
        (for [i (range (count availability))] [:person i]),
        people-in-timeslot-vars
        (for [i timeslots] [:_num-people-in-timeslot i]),
        conflict-constraints
        (for [i timeslots]
          ($in [:_num-people-in-timeslot i] 0 2)),
        number-in-timeslots
        ($cardinality person-vars (zipmap timeslots people-in-timeslot-vars)),
        number-of-conflicts
        [($in :_number-of-conflicts 0 (count timeslots))
         ($cardinality people-in-timeslot-vars {2 :_number-of-conflicts})]
        all-constraints
        (concat availability-constraints conflict-constraints
                [number-in-timeslots] number-of-conflicts)]
    (into (sorted-map)
          (solution all-constraints :minimize :_number-of-conflicts))))

Let’s give it a spin:

=> (schedule-with-conflicts [[1 2 3 4] [1 4] [1 4] [1 4]])
{[:person 0] 2, [:person 1] 4, [:person 2] 4, [:person 3] 1}

### DataTau

#### "How do I become a data scientist?"

#### IMDB Top 100K Movies Analysis in Depth Part 4

#### On Being a Data Scientist

### StackOverflow

#### How to get the nth element from a varargs sequence

I'm having a hard time trying to get the nth element from a varargs sequence in Scala. Here's my code:

def foo(args: String*) = args.toArray(1)

I receive an error like:

error: type mismatch;
 found   : Int(1)
 required: scala.reflect.ClassTag[?]
       def foo(args: String*) = args.toArray(1)

What's interesting, code like this works great:

def foo(args: String*) = args.toArray.apply(1)

I'm pretty new to Scala, but I thought these should be exactly the same. Is using apply the right way to select the nth element from a varargs seq?

### /r/compsci

#### prove f(n)!=O(n^2)

How would I prove that $f(n) \neq O(n^2)$?

submitted by nobody_1

### CompsciOverflow

#### Web content "mining" using supervised learning techniques

We're accessing an API of a web system for obtaining product information. We require some additional information which is not available through the API. This information is publicly available for each product visually, through the source code of each item's page. We've written an algorithm which parses this information from the web page for each item, but that will be highly ineffective in the long run, since the algorithm will simply stop working if and when they decide to change the source code (for example, a redesign of the front end). I feel like there should be a supervised learning approach to this problem, but I'm unaware whether such solutions exist. What are some good approaches to this kind of problem?

Regards.

### StackOverflow

#### Scala in depth - Existential types

I am currently reading Scala in Depth and I am struggling with a point about existential types (with OpenJDK 7 and Scala 2.10.3). The following instructions give me an error:

val x = new VariableStore[Int](12)
val d = new Dependencies {}
val t = x.observe(println)
d.addHandle(t)

<console>:14: error: method addHandle in trait Dependencies cannot be accessed in types.Dependencies
 Access to protected method addHandle not permitted because enclosing object $iw is not a subclass of
 trait Dependencies in package types where target is defined
              ^


And I can't find out why and how I arrive to this error.

Edit 1 : I added the following code from Kihyo's answer :

class MyDependencies extends Dependencies {
}

val x = new VariableStore[Int](12)
val d = new MyDependencies
val t = x.observe(println)


Now I have the following error message :

type mismatch;
 found   : x.Handle (which expands to) x.HandleClass
 required: d.Ref (which expands to) x.Handle forSome { val x: sid.types.obs.Observable }


HandleClass is a Handle, and Ref is a Handle of any Observable (if I get it right), so the value t should be accepted as a correct type for the expression.

#### Golang zmq binding, ZMQ4, returns package error not finding file zmq.h

I am trying to include ZMQ sockets in a Go app, but both zmq4 and gozmq (the two commonly referenced ZMQ binding libraries for Go) are giving me problems. I would like to understand why zmq4, specifically, isn't importable on my system.

I am running a Windows 8 system and I used the windows installer from the ZMQ website for version 4.0.3. I am primarily concerned about getting zmq4 set up and here is the result of my "go get" query on the github library's location:

> go get github.com/pebbe/zmq4
# github.com/pebbe/zmq4
polling.go:4:17: fatal error: zmq.h: No such file or directory
compilation terminated.


This issue is not alleviated by cloning the Github repository - the error remains the same.

I know the issue has to do with the C library zmq.h that is located in the "include" folder of my ZMQ installation, but whether the dependency is held up by a pathing issue or an external tool issue is a mystery to me.

A similar error has come up in regards to node.js and is the solution I see others referred to, outside of node scripting, but it was unsuccessful in my case.

I've so far included the path to the "include" folder in my PATH environment variable, and previously placed zmq.h inside the zmq4 top-level folder. I don't have much of an arsenal otherwise to understand this problem, because I am new to C and to C-importing packages in Go.

#### In Scala, what are the types Unit?

I have tried searching on the internet but there is very little information out there on these. Does anybody know any good reference/s?

Thanks, Tim

#### what is Null used for?

I have done a bit of research on this, and the only information I can get is that scala.Null exists solely for backward compatibility with Java. Is this true?

#### Project compiles fine in IntelliJ, Tomcat says java.lang.NoClassDefFoundError: my/package/name/blah

My project compiles fine in IntelliJ; it is a simple Spring MVC app written in Scala.

I get this error when I run it using tomcat:

java.lang.NoClassDefFoundError: org/example/houses/SomeClassNameHere


The above isn't the exact name of my library.

My controller looks like:

package com.example.scalacms.web.controllers

import org.springframework.stereotype.Controller
import org.springframework.web.bind.annotation.{ResponseBody, RequestMapping}
import org.example.house

@Controller
class HomeController {
  var houses: Houses = _

  @RequestMapping(Array("/"))
  @ResponseBody
  def index: String = {
    "hello, world!"
  }
}


I'm confused because it compiles fine in IntelliJ, it picks up all my classes in intellisense etc.

Could it be that tomcat doesn't have the library in my classpath? I am using an exploded artifact.

I can't see the classpath anywhere in the output windows so I cannot confirm.

#### What is the best way to create/build Scala Project

I started learning Scala lately. I find it extremely difficult to productively add a new project. I was using Visual Studio for years, as I am coming from a .NET development environment.

However, SBT is a great tool to build a Scala project, but you have to make everything from scratch on your own. It's sometimes unproductive to make a project that respects the Scala guidelines and to add plugins.build.

I found many alternatives, but I am not sure if these projects are updated:

2- TypeSafe Templates: http://typesafe.com/activator/templates

3- Using giter8 github.com/n8han/conscript

IDEs lack some features, such as supporting Akka. In addition to that:
for Eclipse, in the project folder, type sbt eclipse; for IntelliJ IDEA, in the project folder, type sbt gen-idea.

What is the best way to create and build Akka projects in an interactive way, respecting the Scala guidelines? (You can see an example project structure at www.scalatra.org/2.2/getting-started/project-structure.html.)

### TheoryOverflow

#### Data Structures - Hash tables [on hold]

I have a question about an exercise I have to make for school. This is the question:

Consider inserting keys into a hash table of length m = 13 using open addressing with the auxiliary hash function h'(k) = k modulo m. Does quadratic probing with c1 = c2 = 1 result in a probe sequence that is a permutation of 0, 1, ..., 12?

Now I'm not sure how to come to the answer. I know that the formula for quadratic probing is the following: h(k, i) = (h'(k) + i + i^2) modulo 13, where you start with i = 0 and increase it until you find an empty spot in the array.

I would say the probe sequence totally depends on the input, but that is obviously not the answer since they don't give any input numbers.
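One way to settle it without picking inputs (illustrative Python, not part of the exercise): with $c_1 = c_2 = 1$, the slot probed at step $i$ is $(h'(k) + i + i^2) \bmod 13$, so the sequence is a permutation of $0, 1, \dots, 12$ exactly when the offsets $(i + i^2) \bmod 13$ for $i = 0, \dots, 12$ are pairwise distinct, independent of the key:

```python
def probe_offsets(m, c1=1, c2=1):
    # Offsets that quadratic probing adds to h'(k) at steps i = 0..m-1.
    # The probe sequence is a permutation of 0..m-1 (for every key) iff
    # these m offsets are pairwise distinct modulo m.
    return [(c1 * i + c2 * i * i) % m for i in range(m)]

distinct = set(probe_offsets(13))
# If len(distinct) < 13, some slots are probed repeatedly and others never.
```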

Can anyone help me with this question? Thanks in advance, Jeroen D.

### StackOverflow

#### How to practically handle List[FieldError] in Lift with Squeryl

I'm new to both Lift and Squeryl.

I was following the example in the Lift cookbook on how to create a schema and some tables. I managed to do so and to insert records; this is my schema code:

object PortfolioSchema extends Schema {

  val systemConfigurations = table[SystemConfiguration]

  on(systemConfigurations) {
    sc =>
      declare(
        sc.name defineAs (unique)
      )
  }

  class SystemConfiguration extends Record[SystemConfiguration] with KeyedRecord[Long] {

    override def meta: MetaRecord[SystemConfiguration] = SystemConfiguration

    override val idField = new LongField(this)

    val name = new StringField(this, 256) {
      override def validations = valUnique("unique.name") _ :: super.validations

      override def setFilter = trim _ :: super.setFilter
    }

    private def valUnique(errorMsg: => String)(name: String): List[FieldError] =
      SystemConfiguration.unique_?(name) match {
        case false => FieldError(this.name, S ? errorMsg) :: Nil
        case true => Nil
      }
  }

  object SystemConfiguration extends SystemConfiguration with MetaRecord[SystemConfiguration] {

    def unique_?(name: String) = from(systemConfigurations) {
      p => where(lower(p.name) === lower(name)) select (p)
    }.isEmpty
  }
}


Long story short, I just have an entity with a name field that is unique, and a validation function that checks this property and returns a FieldError, which is defined in the Lift library as follows:

case class FieldError(field: FieldIdentifier, msg: NodeSeq) {
  override def toString = field.uniqueFieldId + " : " + msg
}

object FieldError {
  import scala.xml.Text
  def apply(field: FieldIdentifier, msg: String) = new FieldError(field, Text(msg))
}


Basically, what it does is attach the field.uniqueFieldId to the error message I specify, so if I use it like this:

valUnique("unique.name") _


What I get in the List[FieldError] is Full(name_id) : unique.name

This doesn't look right to me, because to get my error message I'll have to split the string and remember the field identifier. Is there any better way to handle this error case?

### TheoryOverflow

#### Set packing with maximum coverage objective

We are given a universe $\mathcal{U}=\{e_1,..,e_n\}$ and a set of subsets $\mathcal{S}=\{s_1,s_2,...,s_m\}\subseteq 2^\mathcal{U}$.

Set-Packing asks how many disjoint sets we can pack, and is defined as follows:

Given a number $k\in[m]$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$ such that all of the sets in $\mathcal{S'}$ are disjoint?

Maximum-Coverage, allows intersecting sets, but asks how much of the universe can we cover by $k$ sets:

Given numbers $k\in[m]$, $r\in[n]$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$, such that $|\cup_{s\in\mathcal{S'}}s|\geq r$?

I'm interested in what seems to be a combination of the two, a disjoint cover, which aims at covering as much of $\mathcal{U}$ as possible.

Disjoint-Maximum-Coverage:

Is there a set $\mathcal{S'} \subseteq \mathcal{S}$ such that $|\cup_{s\in\mathcal{S'}}s|\geq k$ (i.e. it covers at least $k$ elements) and the sets in $\mathcal{S'}$ are disjoint?

What can we say about the approximation hardness of $DMC$? Is this problem known under a different name?

Related results:

Both Set-Packing and Maximum-Coverage are known to be $APX$-hard (and even more strongly: unless $P=NP$, $SP$ can't be approximated within $\ln(|\mathcal{S}|)(1-o(1))$, and $MC$ has a tight bound achieved by the greedy algorithm).

$MC$ is approximable within $1-\frac{1}{e} + o(1)$, while the best known bound for $SP$ is an $O(\sqrt{|\mathcal{S}|})$ approximation.
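As an aside, the greedy algorithm referred to for $MC$ (repeatedly take the set covering the most still-uncovered elements) is easy to state; here is a sketch with illustrative data only (the names are mine, not from the question):

```scala
// Greedy Maximum-Coverage: choose k sets, each time picking the one
// that covers the largest number of still-uncovered elements.
def greedyMaxCoverage[A](sets: List[Set[A]], k: Int): Set[A] = {
  var covered = Set.empty[A]
  for (_ <- 1 to k) {
    val best = sets.maxBy(s => (s diff covered).size)
    covered = covered union best
  }
  covered
}

// with k = 2 the greedy choice covers 4 of the 5 elements here
val coveredCount = greedyMaxCoverage(List(Set(1, 2, 3), Set(3, 4), Set(5)), 2).size
```

This is the algorithm attaining the tight $1-1/e$ guarantee mentioned above.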

### CompsciOverflow

#### Proving a binary heap has $\lceil n/2 \rceil$ leaves

I'm trying to prove that a binary heap with $n$ nodes has exactly $\left\lceil \frac{n}{2} \right\rceil$ leaves, given that the heap is built in the following way:

Each new node is inserted via percolate-up. This means that each new node must be created at the next available child position: children are filled level by level, left to right. For example, the following heap:

    0
   / \
  1   2


would have to have been built in this order: 0, 1, 2. (The numbers are just indexes; they give no indication of the actual data held in that node.)

This has two important implications:

1. There can exist no node on level $k+1$ without level $k$ being completely filled.

2. Because children are built left to right, there can be no "empty spaces" between the nodes on level $k+1$, i.e. no situations like the following:

      0
     / \
    1   2
   / \    \
  3   4    6


(This would be an illegal heap by my definition.) Thus, a good way to think of this heap is as an array implementation of a heap, where there can't be any "jumps" in the indices of the array.

So, I was thinking induction would probably be a good way to do this... perhaps something dealing with the even and odd cases for $n$. For example, some induction using the fact that heaps built in this fashion must have an internal node with exactly one child when $n$ is even, and no such node when $n$ is odd. Ideas?
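Not a proof, but the claim is quick to sanity-check: in the 0-indexed array view of such a heap, node $i$ has a child iff $2i+1 < n$, so exactly the indices $i \geq \lfloor n/2 \rfloor$ are leaves, and there are $n - \lfloor n/2 \rfloor = \lceil n/2 \rceil$ of them. A small Scala check (names are mine):

```scala
// Leaves of an n-node array-backed heap (0-indexed): node i is a leaf
// iff its left-child index 2*i + 1 falls outside the array.
def leafCount(n: Int): Int = (0 until n).count(i => 2 * i + 1 >= n)

// agrees with ceil(n/2), computed as (n + 1) / 2 in integer arithmetic
val allMatch = (1 to 1000).forall(n => leafCount(n) == (n + 1) / 2)
```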

### StackOverflow

#### Receive and send email with Scala

I'm planning to build a service using Scala and Akka that will depend heavily on e-mail. In fact, most of the communication with my service will happen by sending e-mails to it and getting replies. I guess this means I need a reliable email server and a way to talk to it from Scala.

The question is: what are the best practices for doing this? Which email server should I choose, and what Scala solutions are there to accomplish this task?

#### Kdevelop4 autocompletion of zeromq headers

I'm using KDevelop 4 on a project that uses the ZeroMQ libraries. KDevelop 4 will autocomplete the ZMQ functions just fine (such as zmq_bind), but it will not autocomplete preprocessor defines such as ZMQ_REQ. So when writing a line like this:

zmq_bind(sock, ZMQ_REP)

it will autocomplete on the zmq_bind but not on the ZMQ_REP. Also, when explicitly requesting completion, it just shows the auto word completion, which usually doesn't help.

Does anyone know how to make this work? The autocompletion options menu in KDevelop 4 does not really offer any relevant settings.

### Planet Theory

#### TR14-032 | Tableau vs. Sequent Calculi for Minimal Entailment | Olaf Beyersdorff, Leroy Chew

In this paper we compare two proof systems for minimal entailment: a tableau system OTAB and a sequent calculus MLK, both developed by Olivetti (J. Autom. Reasoning, 1992). Our main result shows that OTAB-proofs can be efficiently translated into MLK-proofs, i.e., MLK p-simulates OTAB. The simulation is technically very involved and answers an open question posed by Olivetti (1992) on the relation between the two calculi. We also show that the two systems are exponentially separated, i.e., there are formulas which have polynomial-size MLK-proofs, but require exponential-size OTAB-proofs.

### StackOverflow

#### replace mockito function between tests in same class in scala/mockito/play

I am running tests on my Scala application with Play and Mockito.
This is my code:

@RunWith(classOf[JUnitRunner])
class ProductServiceTests extends Specification
  with ProductRepositoryComponent
  with ProductServiceComponentImpl
  with Mockito {

  val productRepository = mock[ProductRepository]
  val productId = "d3d08285-512f-46a6-811f-1abeb94ebb98"
  val product: Product = new Product(Option(productId), "default name", "default description",
    new References(Some("1"), Some("1"), Some("1"), Some("1")))
  val language = "en_US"
  val tenantId = ""

  def mockStuff = {
    product.id.get

    productRepository.updateProduct(any[String], any[String], any[Product]) returns
      product.id.get

    //    {
    //    if (language.equals("en_US"))
    //      product.id.get
    //    else
    //      throw new Exception("Language must be english !! !!")
    //    }
  }

  step(mockStuff)

  "ProductService" should {
    "add minimal product to product repository" in {
      val result = productService.addProduct(language, tenantId, product)
      result mustNotEqual null
      result must beAnInstanceOf[DTOResponse[String]]
      val resultAsStr = result.asInstanceOf[DTOResponse[String]].get
      resultAsStr.length mustEqual 36 // GUID length
      resultAsStr mustEqual productId
    }

    // How can I override addProduct, so that from now on it uses the
    // exception-throwing addProduct (the commented one)?
    "update product in repository" in {
      val result = productService.addProduct("he_IL", tenantId, product)
      result mustNotEqual null
      result must beAnInstanceOf[DTOResponse[String]]
      val resultAsStr = result.asInstanceOf[DTOResponse[String]].get
      resultAsStr.length mustEqual 36 // GUID length
      resultAsStr mustEqual productId
    }
  }
}


I have two `in` blocks inside the `should`. How can I override the mocked addProduct method for the second `in`?

My problem is that I want to simulate two addProduct behaviors: one will succeed, and the other will be invalid because the id already exists.

Thanks!

### CompsciOverflow

#### Is Math a MUST in computer science, specifically in Software Development for Web servers [on hold]

I know math is necessary for application development in simulation software and the like.

But I'm learning Python and trying to go deeper into PHP, and I'd like to develop applications and extend web servers' capabilities.

I'm facing a big dilemma over whether to learn math professionally or not. I have tried my best to avoid being too general in my question.

I have asked this question here because I want to learn this from the roots and deal with its scientific aspects.

### StackOverflow

#### Scala synchronized consumer producer

I want to implement something like the producer-consumer problem (with only one piece of information transmitted at a time), but I want the producer to wait for someone to take its message before leaving.

Here is an example that doesn't block the producer but works otherwise.

class Channel[T] {
  private var _msg: Option[T] = None

  def put(msg: T): Unit =
    this.synchronized {
      waitFor(_msg == None)
      _msg = Some(msg)
      notifyAll()
    }

  def get(): T =
    this.synchronized {
      waitFor(_msg != None)
      val ret = _msg.get
      _msg = None
      notifyAll()
      ret
    }

  private def waitFor(b: => Boolean) =
    while (!b) wait()
}


How can I change it so the producer gets blocked (as the consumer is)?

I tried to add another waitFor at the end of put, but sometimes my producer doesn't get released.

For instance, if I have put ; get in one thread and get ; put in another, most of the time it works, but sometimes the first put does not terminate and the left thread never even runs the get method (I print something once the put call terminates, and in this case it never gets printed).
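One way to make put block is a second wait loop after publishing: the producer waits until the slot has been emptied again. This is a sketch under standard intrinsic-lock monitor semantics (the names are mine). With a single producer and consumer it gives the rendezvous behavior asked for; with several producers it only guarantees that some message was consumed before put returns, and a per-message handshake flag would be needed for more.

```scala
class SyncChannel[T] {
  private var msg: Option[T] = None

  def put(m: T): Unit = this.synchronized {
    while (msg.isDefined) wait()   // wait until the slot is free
    msg = Some(m)
    notifyAll()                    // wake a waiting consumer
    while (msg.isDefined) wait()   // block until the message is taken
  }

  def get(): T = this.synchronized {
    while (msg.isEmpty) wait()     // wait for a message to arrive
    val m = msg.get
    msg = None
    notifyAll()                    // release the blocked producer
    m
  }
}
```

Using while-loops around wait (rather than a single if) also guards against spurious wakeups.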

### StackOverflow

#### Mime Type of Partially Uploaded File

I am using Play Framework version 2.2 and I am trying to get the mime type of a partially uploaded file so I can do a direct upload to an Amazon S3 instance. What is the best practice for doing this?

I am currently using FlowJS but it doesn't look like they have anything in particular for dealing with mime types. Additionally, I plan on making mobile apps that will use the same API so it would be best if it was on the server side and not the client side.

The only solution I can think of is parsing the extension and mapping that to a mime type, but that sounds like a hacky way to do it.
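One server-side alternative to extension mapping is sniffing the magic bytes of the first uploaded chunk. A sketch (names are mine; the JDK's built-in sniffer only knows a limited set of formats, so a dedicated library such as Apache Tika may be needed in practice):

```scala
import java.io.ByteArrayInputStream
import java.net.URLConnection

// Guess a mime type from the leading bytes of a partially uploaded file.
// This works on the first chunk alone, since only the magic bytes are read.
def sniffMimeType(firstChunk: Array[Byte]): Option[String] =
  Option(URLConnection.guessContentTypeFromStream(new ByteArrayInputStream(firstChunk)))

// e.g. the 8-byte PNG signature is enough to identify image/png
val pngHeader = Array(0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a).map(_.toByte)
```

guessContentTypeFromStream requires a mark-supporting stream and inspects only its first few bytes, so it can run as soon as the first chunk arrives.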

#### Discrete Event Simulation without global Queue?

I am thinking about modelling a material flow network. There are processes which operate at a certain speed, buffers which can overflow or underflow and connections between these.

I don't see any problems modelling this in a classic DES fashion using a global event queue. I tried modelling the system without a queue, but failed in the early stages. Still, I do not understand the underlying reason why a queue is needed, at least not for events which originate "inside" the network.

The idea of a queue-less DES is to treat the whole network as a function which takes a stream of events from the outside world and returns a stream of state changes. Every node in the network should only be affected by nodes which are directly connected to it. I have set some hopes on Haskell's arrows and FRP in general, but I am still learning.

An event queue looks too "global" to me. If my network falls apart into two subnets with no connections between them and I only ask questions about the state changes of one subnet, the other subnet should not do any computations at all. I could use two event queues in that case. However, as soon as I connect the two subnets I would have to put all events into a single queue. I don't like the idea, that I need to know the topology of the network in order to set up my queue(s).

So

• is anybody aware of DES algorithms which do not need a global queue?
• is there a reason why this is difficult or even impossible?
• is FRP useful in the context of DES?

### CompsciOverflow

#### Disjoint Sets - Best Case Times

Suppose we have the following implementations of disjoint sets:

1) A linked list with union-by-weight
2) A tree with union-by-rank and path compression

Let $n$ be the number of elements and suppose we have a sequence of $m$ operations (MAKE-SET, FIND-SET, and UNION), where $m > n$.

The worst-case times are respectively $O(m + n \lg n)$ and $O(m \lg^* n)$. I'd like to know whether there are "best case" running times, and if so, for what values of $m$ and $n$?
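For concreteness, here is a minimal sketch of implementation 2), a disjoint-set forest with union by rank and path compression (names are mine):

```scala
// Disjoint-set forest with union by rank and path compression.
class DisjointSets(n: Int) {
  private val parent = Array.tabulate(n)(identity) // each element starts as its own root
  private val rank = Array.fill(n)(0)

  def find(x: Int): Int = {
    if (parent(x) != x) parent(x) = find(parent(x)) // path compression
    parent(x)
  }

  def union(a: Int, b: Int): Unit = {
    val (ra, rb) = (find(a), find(b))
    if (ra != rb) {
      if (rank(ra) < rank(rb)) parent(ra) = rb
      else if (rank(ra) > rank(rb)) parent(rb) = ra
      else { parent(rb) = ra; rank(ra) += 1 }
    }
  }
}
```

Note the quoted bounds are amortized over the whole operation sequence; in a best case, e.g. a run of FIND-SETs applied directly to roots, each operation costs constant time, giving $\Theta(m)$ overall.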

### QuantOverflow

#### Compute the average efficient frontiers with estimated parameters from generated time series

My overall objective is to analyse the impact of estimation error in mean-variance analysis based on historical data. I am given the returns and standard deviations for the five assets under consideration, as well as the correlation matrix for the five assets. I use the function mvnrnd to generate the monthly returns and frontcon to compute the efficient frontier. Three different time periods need to be analysed: 2, 30, and 150 years. I have written the function below to try to calculate the average efficient frontiers, but it fails on the 150-year attempt with the message below. This is my first time writing anything in MATLAB (which I need to use), so I am not 100% sure of my code. It does produce graphs for the 2- and 30-year periods, but I don't know whether the failure at 150 years is due to my bad programming or not. In particular, I wasn't sure how to calculate the average covariance across the 10,000 simulations. Any help would be greatly appreciated. My code is below the error message.

> Warning: Candidate solution is infeasible due to a bad pivot.

> In lcprog>lcprealitycheck at 294
> In lcprog at 251 In qplcprog at 247
> In portopt at 249
> In frontcon at 231 In AverageEfficientFrontiers at 36
> Error using portopt (line 256)
>
> No portfolios satisfy all input constraints for maximum-return
> portfolio. Possibly unbounded problem.
>
> Error in frontcon (line 231)    [PRisk, PRoR, PWts] = portopt(ERet,
> ECov, NPts, RTarget,    ConSet, ...
>
> Error in AverageEfficientFrontiers (line 36) [Risk, Return, Weights] =
> frontcon(AverageReturn, AverageCovariance, 10);"

function [] = AverageEfficientFrontiers(Years, Simulations)

AssetReturns = [0.006, 0.01, 0.014, 0.018, 0.022];
AssetStDev = [0.085, 0.08, 0.095, 0.09, 0.1];
CorrelationMatrix = [1,   0.3, 0.3, 0.3, 0.3;
                     0.3, 1,   0.3, 0.3, 0.3;
                     0.3, 0.3, 1,   0.3, 0.3;
                     0.3, 0.3, 0.3, 1,   0.3;
                     0.3, 0.3, 0.3, 0.3, 1  ];
Months = Years*12;
CovarianceMatrix = corr2cov(AssetStDev, CorrelationMatrix);
% Preallocating avoids the need for MATLAB to copy the data from one array
% to another inside the loop
TotalCumulativeReturn = zeros(Simulations, 5);
PeriodCovariance = zeros(Simulations, 5, 5);

for i = 1:Simulations
    MonthlyReturns = mvnrnd(AssetReturns, CovarianceMatrix, Months);
    % If A is a nonempty matrix, then prod(A) treats the columns of A as
    % vectors and returns a row vector of the products of each column.
    % A(i,:) is the ith row of A.
    TotalCumulativeReturn(i,:) = prod(1 + MonthlyReturns) - 1;
    % For matrix input X, where each row is an observation, and each column
    % is a variable, cov(X) is the covariance matrix.
    % http://www.mathworks.co.uk/help/matlab/ref/cov.html
    PeriodCovariance(i,:,:) = cov(MonthlyReturns)*Months;
end
% If A is a nonempty, nonvector matrix, then mean(A) treats the columns of
% A as vectors and returns a row vector whose elements are the mean of each
% column.
AverageReturn = mean(TotalCumulativeReturn);
AverageCovariance = mean(PeriodCovariance);
% http://www.mathworks.co.uk/help/matlab/ref/reshape.html
AverageCovariance = reshape(AverageCovariance, [5, 5]);
[Risk, Return, Weights] = frontcon(AverageReturn, AverageCovariance, 10);

plot(Risk, Return);
end


### CompsciOverflow

#### Difference between BTSP and TSP

I am wondering: what is the difference between the Bottleneck Travelling Salesman Problem and the normal Travelling Salesman Problem?

Thank you

### StackOverflow

#### Scala nested generic with wildcards assigment

I have these classes:

• trait Tp
• case class T1 extends Tp
• case class T2 extends Tp
• class Vp
• class V1[+T <: Tp]
• class Dr[V <: Vp]

and method foo(x: List[Dr[V1[_ <: Tp]]]). I'm trying to call this method like this

val data = List[Dr[V1[T1]]](new Dr[V1[T1]](...), new Dr[V1[T1]](...))
foo(data)


but get error:

type mismatch;
found   : List[Dr[V1[T1]]]
required: List[Dr[V1[_ <: Tp]]]
val x = foo(data)


How can I write the method signature so that it accepts any List of Dr whose type parameter is some V1[_ <: Tp]? Is that possible?

P.S. The code has been changed, but I hope there are no typos.
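One sketch of a workaround (the class bodies are filled in minimally by me, and I assume V1 extends Vp, which the bound on Dr already requires): make foo generic in the element type rather than using an inner wildcard, so the compiler can instantiate it at T1:

```scala
trait Tp
case class T1() extends Tp
case class T2() extends Tp
class Vp
class V1[+T <: Tp] extends Vp // assumption: V1 must be a Vp for Dr[V1[...]] to typecheck
class Dr[V <: Vp]

// generic in T, so a List[Dr[V1[T1]]] is accepted with T = T1
def foo[T <: Tp](xs: List[Dr[V1[T]]]): Int = xs.length

val data = List(new Dr[V1[T1]], new Dr[V1[T1]])
val n = foo(data) // compiles; n == 2
```

The wildcard version fails because Dr is invariant in V, so Dr[V1[T1]] is not a subtype of Dr[V1[_ <: Tp]]; a type parameter sidesteps that by matching the element type exactly.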

### Overcoming Bias

The management consulting firm Hay Group worked with the German futurists at Z-Punkt to identify six mega trends such as globalization, technology convergence and the individualization of careers that will shape the kind of leaders companies will need in the future. I spoke with Georg Vielmetter, Hay Group’s regional director of leadership and talent, about the newly released study “Leadership 2030” that he co-authored. …

I think that positional power and hierarchical power will become smaller. Power will shift to stakeholders, reducing the authority of the people who are supposed to lead the organization. … The time of the alpha male — of the dominant, typically male leader who knows everything, who gives direction to everybody and sets the pace, whom everybody follows because this person is so smart and intelligent and clever — this time is over. We need a new kind of leader who focuses much more on relationships and understands that leadership is not about himself. …

Such a leader doesn’t doesn’t put himself at the very center. He knows he needs to listen to other people. He knows he needs to be intellectually curious and emotionally open. He knows that he needs empathy to do the job, not just in order to be a good person. … We will see a significant decline in physical loyalty between people and organizations. It will be very difficult for leaders to formally bind people to their organizations, so they should not try. This is a battle that leaders can only lose. … What is clear is that leaders in the future need to have a full understanding, and also an emotional understanding, of diversity. That’s for sure. (more)

I call bull. Here’s Jeffrey Pfeffer, in Power:

Most books by well-known executives and most lectures and courses about leadership should be stamped CAUTION: THIS MATERIAL CAN BE HAZARDOUS TO YOUR ORGANIZATIONAL SURVIVAL. That’s because leaders touting their own careers as models to be emulated frequently gloss over the power plays they actually used to get to the top. Meanwhile, the teaching on leadership is filled with prescriptions about following an inner compass, being truthful, letting inner feelings show, being modest and self-effacing, not behaving in a bullying or abusive way— in short, prescriptions about how people wish the world and the powerful behaved. There is no doubt that the world would be a much better, more humane place if people were always authentic, modest, truthful, and consistently concerned for the welfare of others instead of pursuing their own aims. But that world doesn’t exist.

More from Pfeffer last November:

Today’s work world is increasingly populated by millennials with values presumably different from more-senior employees—more egalitarian, less competitive, more meritocratic, less accepting of hierarchy, and more tolerant of all forms of diversity. And if that’s true, surely companies are changing, which means we need new theories about power and influence to reflect these new cultural realities. Strategically expressing anger, building a power base, or eliminating rivals are considered outmoded ways of getting ahead. Certainly, the reasoning goes, in a world where reputations get created and transmitted quickly and anonymously through ubiquitous social networks, people who resort to such bad behavior will suffer swift retribution.

The typical Silicon Valley recruitment pitch, or something to this effect, reinforces this view: “We’re not political here. We’re young, cool, socially networked, hip, high-technology people focused on building and selling great products. We’re family-friendly, have fewer management levels and less hierarchy, and make decisions collegially.”

Unfortunately there’s not much evidence of change but plenty of testimony to the contrary: the power struggles that beset the founding of Twitter (TWTR), the turnover among CEOs at Hewlett-Packard (HPQ), and the experiences of former Stanford MBA students working in the supposedly egalitarian world of high tech who have lost their jobs or been thrown out of companies they founded notwithstanding their intelligence and good job performance. Meanwhile, relationships with bosses still go a long way to predict people’s career success; organizational gossip lives on; and career derailment still awaits those who fail to master political dynamics. (more)

### Fred Wilson

#### Changing Clocks

I was in an elementary school in Brooklyn the other day and the clocks in the halls were an hour off. It was really bothersome to me. Maybe that school does not observe daylight savings time, but more likely the janitor or whoever is responsible for changing the clocks could not be bothered. Of course, the clocks in that school are now set correctly.

I’m a bit OCD about changing the clocks in our house and our cars. I hate it when a clock is set to the wrong time. And, each and every clock has its own system for changing the time. The clock in our double hung oven in our kitchen has a particularly complicated system. I had to find the manual on the Internet and look up the technique this morning after The Gotham Gal and I spent a few minutes hitting all sorts of combinations of buttons and got nowhere.

And then there are the cars. Whoever teaches people how to design user interfaces for car dashboards must have a perverse sense of humor. Each and every car has a different system for changing the clock time, and each one is clunkier than the next.

But I go through all these machinations every six months because I can’t stand having clocks with the wrong times on them. Thankfully more and more of the clocks in my life are connected to the Internet and update automatically. I wish the clocks in our cars, on our ovens, and in our elementary schools would do the same.

### TheoryOverflow

#### A Lambda calculus for invertible (r-Turing computable) functions

I'm interested in the concept of "r-Turing completeness", as defined by Axelsen and Glück (2011). A system is r-Turing complete if it can compute the same set of functions as a reversible Turing machine, without producing any "garbage" data. This is the same as being able to compute every function that is both (a) computable, and (b) injective.

I would like to computationally explore the space of computable injective functions. In order to do this I'm looking for the "most minimal" reversible programming language --- something that can play the equivalent role for r-Turing computability that the lambda calculus plays for Turing computability.

I know that there are many reversible languages that people have developed and proven to be r-Turing complete. However, these are being developed with practical applications in mind, and so their authors concentrate on giving them expressive features rather than making them minimal.

Does anyone know if such a minimal invertible language has been described, or whether there is any research in such a direction? I'm fairly new to the literature on this topic, so I could easily have missed it. Alternatively, does anyone have any insight into how such a language could be created?

Below is a summary of what I'm looking for. I do not know whether it can be created by modifying the lambda calculus itself, or whether a completely different type of language would have to be used.

• r-Turing complete language - computes all computable invertible functions, and can only compute invertible functions
• Syntax and semantics as minimal as possible. (E.g. Lambda calculus has only function definitions and applications, and nothing else.) It isn't necessary for the syntax or semantics to be related to those of the lambda calculus, although they could be.
• Program = data. That is, the programs operate on expressions rather than any other kind of data. This guarantees that a program's output can always be interpreted as a program. This probably implies that it has to be a functional rather than an imperative style of language.
• There is some systematic way to convert a program into its inverse, which doesn't involve substantially more computation than that involved in actually performing the inverse computation. (Not all invertible languages have this property, but some do.)

I should emphasise that Axelsen and Glück's approach to reversible computing is quite different from the well-known approach due to Bennett, where an (in general non-invertible) program is made invertible by returning some information about the computation's history along with the output. r-Turing completeness is about being able to compute injective functions without any additional output. There are several things called variations of "reversible lambda calculus" that are reversible in Bennett's sense - those are not what I'm looking for.

### StackOverflow

#### Errors while compiling project migrated to SBT - error while loading package and Assertions

I'm migrating a Scala application, which compiles and runs fine with jars manually included in the classpath, to an SBT build configuration.

My build.sbt is as follows:

name := "hello"

version := "1.0"

scalaVersion := "2.9.2"

libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.6.4"

libraryDependencies += "junit" % "junit" % "4.11"

libraryDependencies += "org.scalatest" % "scalatest_2.10" % "1.9.2"

libraryDependencies += "org.hamcrest" % "hamcrest-all" % "1.3"

libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.0.13"

libraryDependencies += "com.github.scct" % "scct_2.10" % "0.2.1"

libraryDependencies += "org.scala-lang" % "scala-swing" % "2.9.2"


When I compile it I get the following errors:

Loading /usr/share/sbt/bin/sbt-launch-lib.bash
[info] Set current project to hello (in build file:/home/kevin/gitrepos/go-game-msc/)
> compile
[info] Updating {file:/home/kevin/gitrepos/go-game-msc/}go-game-msc...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 25 Scala sources to /home/kevin/gitrepos/go-game-msc/target/scala-2.9.2/classes...
[error] error while loading package, class file needed by package is missing.
[error] reference value <init>$default$2 of object deprecated refers to nonexisting symbol.
[error] error while loading Assertions, class file needed by Assertions is missing.
[error] reference value <init>$default$2 of object deprecated refers to nonexisting symbol.
[error] two errors found
[error] (compile:compile) Compilation failed
[error] Total time: 21 s, completed 09-Mar-2014 12:07:14


I've tried matching up the dependencies with the jar files I am using:

hamcrest-all-1.3.jar
logback-classic-1.0.13.jar
scalaedit-assembly-0.3.7(1).jar
scalatest_2.9.0-1.9.1.jar
slf4j-simple-1.6.4.jar
hamcrest-core-1.3.jar
logback-core-1.0.13.jar
scalaedit-assembly-0.3.7.jar
scct_2.9.2-0.2-SNAPSHOT.jar
junit-4.11.jar
miglayout-4.0.jar
scalariform.jar
slf4j-api-1.7.5.jar
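The two "error while loading" messages are consistent with binary-incompatible artifacts on the classpath: the build sets scalaVersion := "2.9.2" but pulls in scalatest_2.10 and scct_2.10, which are compiled against Scala 2.10, while the jar list above shows 2.9.x variants (scalatest_2.9.0-1.9.1.jar, scct_2.9.2-0.2-SNAPSHOT.jar). A hedged sketch of one way to line the versions up, with the version numbers taken from that jar list rather than verified against the repositories:

```scala
scalaVersion := "2.9.2"

// explicit 2.9.x artifacts instead of the hard-coded _2.10 suffixes
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.9.1" % "test"

libraryDependencies += "com.github.scct" % "scct_2.9.2" % "0.2-SNAPSHOT"
```

Alternatively, upgrading scalaVersion to 2.10.x would match the _2.10 artifacts already declared.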


### DragonFly BSD Digest

This week blew up with links fast.

Your unrelated video of the week: This trailer for Crawl.  This is a roguelike multiplayer cross-platform game, though I don’t know if it would work on BSD.  The important thing: the voiceover narration is fantastic.

### StackOverflow

#### Scala Seq.sliding() violating the docs rationale?

When writing tests for some part of my system I found some weird behavior, which upon closer inspection boils down to the following:

scala> List(0, 1, 2, 3).sliding(2).toList
res36: List[List[Int]] = List(List(0, 1), List(1, 2), List(2, 3))

scala> List(0, 1, 2).sliding(2).toList
res37: List[List[Int]] = List(List(0, 1), List(1, 2))

scala> List(0, 1).sliding(2).toList
res38: List[List[Int]] = List(List(0, 1))

scala> List(0).sliding(2).toList //I mean the result of this line
res39: List[List[Int]] = List(List(0))


To me it seems like List.sliding(), and the sliding() implementations for a number of other types, violate the guarantee given in the docs:

def sliding(size: Int): Iterator[List[A]]


Groups elements in fixed size blocks by passing a "sliding window" over them (as opposed to partitioning them, as is done in grouped.)

size: the number of elements per group

returns: An iterator producing lists of size size, except the last and the only element will be truncated if there are fewer elements than size.

From what I understand, there is a guarantee that all the lists iterated over using the iterator returned by sliding(2) will be of length 2. I find it hard to believe that a bug like this would survive all the way to the current version of Scala, so perhaps there's an explanation for this, or I'm misunderstanding the docs?

I'm using "Scala version 2.10.3 (OpenJDK 64-Bit Server VM, Java 1.7.0_25)."
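Whatever the reading of the docs, the truncated final (or only) window can be filtered out explicitly when strict windows are required; a small sketch (the helper name is mine):

```scala
// Keep only windows of exactly the requested size, dropping the truncated
// window that sliding produces when the input is shorter than the window.
def fullWindows[A](xs: List[A], size: Int): List[List[A]] =
  xs.sliding(size).filter(_.length == size).toList

val some = fullWindows(List(0, 1, 2), 2) // List(List(0, 1), List(1, 2))
val none = fullWindows(List(0), 2)       // List(): no full window exists
```

This leaves sliding's actual behavior intact and just makes the "windows must be full" requirement explicit at the call site.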

#### Play WebSocket client for load testing another play websocket server app

We have an existing play server app to which mobile clients talk via web sockets (two way communication). Now as part of load testing we need to simulate hundreds of client requests to the server.

I was thinking of writing a separate, faceless Play client app and somehow, in a loop, making hundreds of requests to the server app. Given that I am new to web sockets, does this approach sound reasonable?

Also, what is the best way to write a faceless web socket client that makes web socket requests to a web socket server?

#### NoClassDefFoundError: scala/reflect/internal/settings/AbsSettings when "show compile:full-classpath" in Play project?

When I execute show compile:full-classpath in the Play console I get:

[error] (drivetag-play/compile:compile) java.lang.NoClassDefFoundError: scala/reflect/internal/settings/AbsSettings


I'm attempting to add scala-reflect-2.10.0 to the classpath, because trying to compile the whole project gives me the same error. How can I solve this?

#### cascalog and clojure.core.matrix in the same namespace

I have this namespace

(ns my-namespace.blah
  (:refer-clojure :exclude [* - + == /])
  (:use clojure.core.matrix)
  (:use clojure.core.matrix.operators)
  (:use [cascalog.api]))


When I load it in the terminal, I get

user=> (use 'my-namespace.blah)

IllegalStateException div already refers to: #'clojure.core.matrix/div in namespace: my-namespace.blah  clojure.lang.Namespace.warnOrFailOnReplace (Namespace.java:88)
user=>


What does this mean?

#### Functional closures in python [on hold]

• Are there functional closures in python?
• How do they work?
• When are they useful?

I am pretty new to functional programming concepts.

#### Scala/Play/Akka: remote application to application communication

If I have two Scala/Play applications on different servers, what would be the best way for them to communicate for sending small bits of data both ways?

1. RESTful approach
2. Akka remote actors
3. Something else?

I was initially thinking about Akka remote actors, but there's one question I can't find an answer for: how is authorisation between the two applications handled in such a case?

#### How can I unit test this Play Scala controller using Specs2?

I have the following controller code, which connects to a MongoDB instance, retrieves some data, and then maps the data to a JSON list of friends. How can I unit test it using Specs2?

object Friends extends Controller with MongoController {

  def collection: JSONCollection = db.collection[JSONCollection]("friends")

  def list = Action.async {
    val cursor: Cursor[Friend] = collection.find(Json.obj()).cursor[Friend]

    val futureFriendList: Future[List[Friend]] = cursor.collect[List]()

    // map over the future and serialize the resolved list, not the Future itself
    futureFriendList.map { friends => Ok(Json.toJson(friends)) }
  }

}


### CompsciOverflow

#### Simulation of randomized distributed algorithms

I want to implement various randomized distributed algorithms for MIS, broadcast, and coloring in radio networks. I want to use a very high-level language or a simulator for this. Any suggestions?

### StackOverflow

#### g8 ripla/vaadin-scala => scala.MatchError: 0.13.0

I would like to create a Scala project with Vaadin using giter8, but there's a problem:

new-host-3:sms oliviersaint-eve$ g8 ripla/vaadin-scala
Template for Vaadin Scala projects.
package [com.example]: lorry.mars2013
name [Vaadin Scala project]: Test6
classname [VaadinScala]:
Template applied in ./test6
new-host-3:sms oliviersaint-eve$ cd Test6
new-host-3:Test6 oliviersaint-eve$ ls
build.sbt project src
new-host-3:Test6 oliviersaint-eve$ sbt
[error] scala.MatchError: 0.13.0 (of class java.lang.String)
[error] Use 'last' for the full log.


I am using a Mac (Mac OS X 10.7.5), with g8 0.5.3, sbt 0.13.0, and Java 1.7.0_45.

Can you help me?

new-host-3:poubelle oliviersaint-eve$ sbt about
[info] Set current project to poubelle (in build file:/Users/oliviersaint-eve/sms/poubelle/)
[info] This is sbt 0.13.0
[info] The current project is {file:/Users/oliviersaint-eve/sms/poubelle/}poubelle 0.1-SNAPSHOT
[info] The current project is built against Scala 2.10.2
[info]
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.2

Files/Directories in "test6":

build.sbt:

name := "Test6"

scalaVersion := "2.9.2"

seq(webSettings: _*)

resolvers += "Vaadin add-ons repository" at "http://maven.vaadin.com/vaadin-addons"

// basic dependencies
libraryDependencies ++= Seq(
  "com.vaadin" % "vaadin" % "6.8.2",
  "org.vaadin.addons" % "scaladin" % "2.0.0",
  "org.eclipse.jetty" % "jetty-webapp" % "8.0.4.v20111024" % "container"
)

src/main/scala/lorry/mars2013/VaadinScalaApplication.scala:

package lorry.mars2013

import vaadin.scala._

class VaadinScalaApplication extends Application("Test6") {
  override val main: ComponentContainer = new VerticalLayout {
    margin = true
    components += Label("This Vaadin app uses Scaladin!")
  }
}

src/main/scala/lorry/mars2013/VaadinScalaWidgetset.gwt.xml:

<!-- Add widgetset modules from add-ons here. E.g.
<inherits name="org.vaadin.teemu.ratingstars.gwt.RatingStarsWidgetset" /> -->
</module>

src/main/webapp/WEB-INF/web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="VaadinScala" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <display-name>Test6</display-name>
  <context-param>
    <description>Vaadin production mode</description>
    <param-name>productionMode</param-name>
    <param-value>false</param-value>
  </context-param>
  <servlet>
    <servlet-name>Test6</servlet-name>
    <servlet-class>com.vaadin.terminal.gwt.server.ApplicationServlet</servlet-class>
    <init-param>
      <description>Vaadin application class to start</description>
      <param-name>application</param-name>
      <param-value>lorry.mars2013.VaadinScalaApplication</param-value>
    </init-param>
    <!--<init-param>
      <description>Application widgetset</description>
      <param-name>widgetset</param-name>
      <param-value>lorry.mars2013.VaadinScalaWidgetset</param-value>
    </init-param>-->
  </servlet>
  <servlet-mapping>
    <servlet-name>Test6</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

Thanks! I did not quite understand what you meant by text format; do you want me to provide links to the files (via "pastebin", for example) so you can try it yourself?

### Oleg Kiselyov

#### Tagless-Staged: Combinators for Impure yet Hygienic Code Generation

#### Streams of random data: a pitfall of lazy evaluation

#### Higher-order, Typed, Inferred, Strict: ACM SIGPLAN ML Family Workshop Call for papers

### StackOverflow

#### Why did replacing my Scala case class with an extractor break my higher order function?

Suppose I have a simple case class that wraps integers, and a higher order method that accepts a function carrying integers to wrappers.
case class Wrapper(x: Int)
def higherOrder(f: Int => Wrapper) = println(f(42))

Then, I can call the higher order function, passing in the wrapper's generated apply function. Amazingly, I can also just pass in the wrapper's name.

higherOrder(Wrapper.apply) // okay
higherOrder(Wrapper) // okay, wow!

This is really cool. It allows us to treat the name of the case class as a function, which fosters expressive abstractions. For an example of this coolness, see the answer here: What does "abstract over" mean?

Now suppose my case class isn't powerful enough, and I need to create an extractor instead. As a mildly contrived use case, let's say I need to be able to pattern match on strings that parse into integers.

// Replace Wrapper case class with an extractor
object Wrapper {
  def apply(x: Int) = new Wrapper(x)
  def unapply(s: String): Option[Wrapper] = {
    // details elided
  }
}

class Wrapper(x: Int) {
  override def toString = "Wrapper(" + x + ")"
  // other methods elided
}

Under this change, I can still pass Wrapper.apply into my higher order function, but passing in just Wrapper no longer works.

higherOrder(Wrapper.apply) // still okay
higherOrder(Wrapper) // DOES NOT COMPILE
            ^^^^^^^
// type mismatch; found : Wrapper.type (with underlying type Wrapper)
// required: (Int) => Wrapper

Ouch! Here's why this asymmetry is troubling. The advice of Odersky, Spoon, and Venners (Programming in Scala, page 500) says:

You could always start with case classes, and then, if the need arises, change to extractors. Because patterns over extractors and patterns over case classes look exactly the same in Scala, pattern matches in your clients will continue to work.

Quite true of course, but we'll break our clients if they are using case class names as functions. And since doing so enables powerful abstractions, some surely will. So, when passing them into higher order functions, how can we make extractors behave the same as case classes?
### QuantOverflow

#### Is Behavioral Finance relevant to quants?

This topic has been prompted by the following question: Measuring Behavioral Finance Effects in Fund/Portfolio Manager Analysis

After reading it and the comments below, I started thinking about whether behavioral finance could be incorporated into the pricing paradigms used by quants.

• Couldn't option pricing benefit from it, at least to some extent? E.g. with American options pricing - one mostly uses the continuation value to analyse whether the holder would exercise or not.

• Does literature on the interfacing of behavioural finance and pricing exist?

• How relevant is it in portfolio optimization?

• What about an application to credit risk modeling? Or risk management in general, as in Basel III or Solvency II, where the company's assets are projected into the future? These projections also include assumptions on how the management will act in certain situations.

### StackOverflow

#### Testing Json in scala / play doesn't locate Formatter

I am testing my Json writes/reads. For some reason, it cannot find the Format definition.
Here is the test:

@RunWith(classOf[JUnitRunner])
class ProductJsonSpec extends Specification with ProductFormats {
  "Json serialization" should {
    "For References" in {
      val refAsOb: References = new References(Some("1"), Some("1"), Some("1"), Some("1"))
      val refAsJson = Json.toJson(refAsOb)
      val json = Json.parse(
        """{ "references": { "configuratorId": "1", "seekId": "1", "hsId": "1", "fpId": "1"} }""")
      json must beEqualTo(refAsJson)
    }
  }
}

I have Models.scala where the class is located and defined:

case class References(configuratorId: Option[String], seekId: Option[String], hsId:

In Models.scala there is the following trait:

trait ProductFormats extends ErrorFormats {
  implicit val referenceFormat = new Format[References] {
    def writes(item: References): JsValue = {
      Json.obj(
        "configuratorId" -> item.configuratorId,
        "seekId" -> item.seekId,
        "hsId" -> item.hsId,
        "fpId" -> item.fpId
      )
    }
    def reads(json: JsValue): JsResult[References] = JsSuccess(new References(
      (json \ "configuratorId").as[Option[String]],
      (json \ "seekId").as[Option[String]],
      (json \ "hsId").as[Option[String]],
      (json \ "fpId").as[Option[String]]
    ))
  }
}

When I run this using play -> testOnly product.ProductJsonSpec, I get the following error:

[info] ProductJsonSpec
[info] Json serialization should
[info] x For References
[error] '{"references":{"configuratorId":"1","seekId":"1","hsId":"1","fpId":"1"}}' is not equal to '{}' (ProductJsonSpec.scala:26)
[error] Expected: {}
[error] Actual: {"references":{"configuratorId":"1","seekId":"1","hsId":"1","fpId":"1"}}
[info] Total for specification ProductJsonSpec
[info] Finished in 196 ms
[info] 1 example, 1 failure, 0 error
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error] product.ProductJsonSpec
[error] (main/test:testOnly) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 4 s, completed Mar 9, 2014 12:23:26 PM

What's weird? This works when I'm not testing...
#### Does Google Dart language support functional programming?

Does the Google Dart language allow for functional programming?

• Can functions be stored as variables (references)?

• Functional currying?

• Lazy parameters (not so important to me).

• Clearly Dart has mutable data, so that rules out immutability.

Other features of functional programming?

#### How to add runtime annotations on scala synthetic methods?

For example, with the following definitions

class A
class B extends A

trait MyTrait {
  @MyAnnotation def foo(): A
}

class MyClass extends MyTrait {
  @MyAnnotation def foo() = new B
}

Scala generates a synthetic method foo() that returns A in MyClass. Looking at the produced bytecode (with javap) I can see:

public net.bdew.mytest.test.B foo();
  flags: ACC_PUBLIC
  Code:
    stack=2, locals=1, args_size=1
       0: new           #15  // class net/bdew/mytest/test/B
       3: dup
       4: invokespecial #19  // Method net/bdew/mytest/test/B."<init>":()V
       7: areturn
    LocalVariableTable:
      Start  Length  Slot  Name  Signature
          0       8     0  this  Lnet/bdew/mytest/test/MyClass;
    LineNumberTable:
      line 22: 0
  RuntimeVisibleAnnotations:
    0: #13()

That's the real method, and the annotation is present on it.

public net.bdew.mytest.test.A foo();
  flags: ACC_PUBLIC, ACC_BRIDGE, ACC_SYNTHETIC
  Code:
    stack=1, locals=1, args_size=1
       0: aload_0
       1: invokevirtual #24  // Method foo:()Lnet/bdew/mytest/test/B;
       4: areturn
    LocalVariableTable:
      Start  Length  Slot  Name  Signature
          0       5     0  this  Lnet/bdew/mytest/test/MyClass;
    LineNumberTable:
      line 20: 0

That's the synthetic one, without the annotation. Is there any way to have the annotation appear on it as well?

Edit: to clarify, MyAnnotation is defined in Java with runtime retention.

### QuantOverflow

#### An alternative to the Gaussian distribution to describe/fit market stock returns

After the financial crisis in 2008, many people (including me) don't really believe that stock returns can be described in terms of the normal distribution (Gaussian distribution).
But besides the Gaussian distribution, is there any other distribution that has been found to better describe how the stock market behaves?

### StackOverflow

#### Error when running sbt install-emulator

I am following the video on this page http://zegoggl.es/2009/12/building-android-apps-in-scala-with-sbt.html which uses SBT to create an Android project. However, I get to the point of trying to install the emulator using

sbt install-emulator

And I get the following error:

[info] Nothing to compile.
[info] Post-analysis: 1 classes.
[info] == tests / compile ==
[info]
[info] == awesomepad / proguard ==
ProGuard, version 4.4
ProGuard is released under the GNU General Public License. The authors of all
programs or plugins that link to it (sbt, ...) therefore must ensure that these
programs carry the GNU General Public License as well.
Reading program directory [C:\Projects\Scala\sbt2test\awesomepad\target\scala_2.9.1\classes]
java.io.IOException: Can't read [proguard.ClassPathEntry@550a17fb] (Can't process
class [com/kickass/awesomepad/R$attr.class] (Unsupported version number [51.0]
for class format))


#### mapdb: how to persist across restarts

I use mapdb as follows:

val mycache = DBMaker.newFileDB(new File("/data/tmp/cache.db"))
.transactionDisable()
.make().getHashSet("")


Then when I do

mycache.put(k1, v1)
assertTrue(mycache.get(k1), v1) // all is fine


However, if I restart my server, I do see cache.db on disk, but when reading it I get an empty map.

so

mycache.get(k1) // is null after restart


How can I have it re-read my map from the file after a restart?
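For reference, the requirement here is language-agnostic: write under a stable name, close (or flush) the store before shutdown, and reopen the same file after restart. A minimal sketch of that pattern using Python's stdlib shelve as a stand-in (an analogy, not MapDB itself):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.db")

# First "server run": write an entry, then close so it is flushed to disk.
with shelve.open(path) as cache:
    cache["k1"] = "v1"

# Simulated restart: reopen the same file; the entry is still there.
with shelve.open(path) as cache:
    print(cache.get("k1"))  # -> v1
```

The same two points apply to the MapDB snippet above: the collection must be reopened under the same name it was written under, and the store must be flushed/closed cleanly before the process exits.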

### TheoryOverflow

#### Low space computation and branching program

One of the most elementary results on the relationship between boolean circuit size and uniform computation is Pippenger and Fischer's simulation: $DTIME[T(n)]\subseteq SIZE[T(n)\log T(n)]$.

I want to consider small branching program families simulating low-space computation.

Question 1: What is the best simulation bound $S_{1}(n)$ known at this moment such that $DSPACE[S(n)]\subseteq BP-SIZE[S_{1}(n)]$?

Question 2: What is the best known time complexity bound for the function $1^{n}\rightarrow$ a branching program as in Question 1?

### StackOverflow

#### In sbt, how do I create a source dependency on a branch of a local git repository?

sbt has these syntaxes for source dependency projects:

RootProject(file("/a/b/c"))
RootProject(uri("git://github.com/a/b/c#some-branch"))


But I can't find any way to clone from a local git repository that doesn't require something ridiculous like running a git server. I would like to express the following, or a moral equivalent. By "moral equivalent" I mean it should not require ssh, working DNS, a git server, or even working resolution of "localhost", nor should it introduce any form of pointless build fragility. (Manually checking out the desired branch into another working dir and pointing sbt at that is an example of pointless build fragility.)

RootProject(file("/a/b/c#some-branch"))
// This seems like the most plausible syntax,
// but it explodes during cloning - "ssh: Could not resolve hostname git"
RootProject(uri("git:/a/b/c#some-branch"))


### CompsciOverflow

#### In Probabilistic Graphical Models, are Cliques and Clusters the same?

I am learning Probabilistic Graphical Models with the help of the videos on Coursera. I am in week 4, and I see cliques mentioned often. But the graphs being discussed are cluster graphs. So are cliques and clusters the same thing?

### Planet Clojure

#### Working with core.async: Exceptions in go blocks

Dealing with exceptions in go blocks/threads is different from normal Clojure code. This gotcha is very common when moving your code into core.async go blocks -- all your exceptions are gone! Since the body of a go block is run on a thread pool, there's not much we can do with an exception, so core.async will just eat it and close the channel. That's what happened in the second snippet in this post. The nil result is because the channel we read from is closed.

I find myself wanting to know the cause of the problem at the consumer side of a channel. That means the go block needs to catch the exception and put it (the exception) on the channel before it dies. David Nolen has written about this pattern, and I've been using the proposed <? quite happily.
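The pattern itself (catch in the producer, put the exception on the channel, re-throw when taking) is language-neutral. Here is a rough sketch of it using a plain Python queue and thread as stand-ins for a channel and a go block (an analogy for illustration, not core.async):

```python
import queue
import threading

def go_block(ch):
    """Producer: on failure, put the exception itself on the channel before dying."""
    try:
        raise ValueError("boom")  # the failing body
    except Exception as exc:
        ch.put(exc)

def take_maybe_throw(ch):
    """Consumer-side take, like <?: re-throw if the value is an exception."""
    value = ch.get()
    if isinstance(value, Exception):
        raise value
    return value

ch = queue.Queue()
threading.Thread(target=go_block, args=(ch,)).start()
try:
    take_maybe_throw(ch)
except ValueError as exc:
    print("consumer saw:", exc)  # -> consumer saw: boom
```

The consumer gets the original failure with its stack trace instead of a silent nil from a closed channel.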

If you're interested in how some Go examples convert to core.async check out this repo.

#### Working with core.async: Chaining go blocks

One particularly annoying difference between core.async and Go is that you can't wrap function calls with the go macro. This is due to implementation details of core.async, which can only see the body 'inside' the macro and not the functions it may call. This is obviously not a problem if the called function doesn't interact with any channels, but if it does, you might be in trouble. I've touched on this subject in a previous post.

Anyway, let me explain what I mean.

Let's say we have a complicated get-result function that hits some external services (waiting for the results) and then feeds the input to a big calculation function multiple times. All examples below are simplified for brevity.

This is all fine and well, but let's say the calculation function also needs to wait on some data, so it needs to become a go-routine as well. This means that we no longer have a return value but a channel holding the result. Let's use some FP to get all the data out.

Nope, you can't do that: Assert failed: <! used not in (go ...) block. It also 'returns' nil, as explained in this post. Let's try another way.

Oh dear, two orders of magnitude slower, and that warm fuzzy FP feeling is gone.

Since a go block returns a channel (with the result), you now have to deal with taking that value out of the channel. If you have long 'go-call-chains' of go blocks, you're going to spend lots of time going in and out of channels. In this case we have lock contention among all the calculation-go2 blocks and that single channel.

The nil-returning snippet above can be written in a similar fashion using some of core.async's helper functions (thanks to Ben Ashford for pointing this out);

Unfortunately this performs even worse than the written out go-loop, but it is much nicer.

## How is this any better in Go?

Here's a rough equivalent of the 2 scenarios in Go.

The key difference is that the caller puts the function call in a goroutine, and then any subsequent functions are free to operate on any channel without themselves being wrapped in go.

It also performs better: getResult2 is only an order of magnitude slower than getResult.

## The blessings and curses of macros

If we have to wrap every function in a go block and if chaining go blocks is so slow, can we just inline that function in our outer go block somehow? Yes we can, we can turn that function into a macro.

Problem solved, right? Well, not really. Instead of composable functions (well, kind of, since they return channels) we now have a special kind of macro that must be called from within a go block. In the snippet above we can't use the lovely (reduce ... (repeatedly calculation-go-macro)) form since we can't use macros that way. However, the macro itself can use <!, >! etc. freely without the go wrapper, and we've solved the perf problem.

If you're interested in how some Go examples convert to core.async check out this repo.

#### Working with core.async: Blocking calls

You can't do anything even remotely blocking inside go blocks. This is because all the core.async go blocks share a single thread pool with a very limited number of threads (go blocks are supposed to be CPU bound). So if you have hundreds or thousands of go blocks running concurrently, just having a few (a handful really) block means all go blocks will stop! For a more in-depth explanation see this previous post.

But what is blocking anyway? If an API you are using claims to be non-blocking, is it really? Unfortunately this isn't black and white; some functions are more non-blocking than others. They can also become 'more blocking' by accident. One good example of this is the async APIs of any client that writes to sockets. When the network stack of the system is very stressed, these calls start slowly drifting towards more blocking -- with very bad effects on the core.async go thread pool.

The only way to be sure is to measure / profile the functions you call inside your go blocks under different circumstances: different loads on internal and external systems. Here's a neat little trick I used: clearly mark the functions I suspect with metadata, and then instrument and profile them while the system is running.
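The mark-and-instrument trick is easy to sketch outside Clojure too. Here is a hypothetical Python version of the same idea, wrapping a suspect function so every call records how long it actually blocked (all names here are illustrative):

```python
import functools
import time

def instrument(fn):
    """Wrap a suspect function and record the wall-clock duration of every call."""
    timings = []

    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings.append(time.perf_counter() - start)

    wrapped.timings = timings  # inspect this list while the system is running
    return wrapped

@instrument
def claims_to_be_nonblocking():
    time.sleep(0.02)  # stands in for a call that quietly drifts towards blocking

claims_to_be_nonblocking()
print(max(claims_to_be_nonblocking.timings))  # worst observed blocking time
```

Sampling these timing lists under load is what exposes the "non-blocking" calls that are no longer non-blocking.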

If you're interested in how some Go examples convert to core.async check out this repo.

### StackOverflow

#### Play 2.2.1: ambiguous implicit values for object PlayMagicForJava

I run Play Framework 2.2.1. I used to have only Java controllers rendering templates. Now I am adding a Scala controller to render a new template, indexScala.scala.html. The parameter list for indexScala.scala.html:

@()(implicit request: play.api.mvc.RequestHeader)


and it calls

@mainEmptyScala("blah", head) {}


The parameter list for mainEmptyScala.scala.html:

@(title: String, head: Html = Html(""))(body: Html)(implicit request: play.api.mvc.RequestHeader)


When I call the indexScala template, I also declare the request as implicit in the Scala controller. However, I get this compile error.

[error] ~/myapp/app/views/indexScala.scala.html:29: ambiguous implicit values:
[error]  and value request of type play.api.mvc.RequestHeader
[error]                               ^


I made sure that indexScala and mainEmptyScala templates are not called by any Java controller, so PlayMagicForJava shouldn't be used. Does anybody know how to resolve this compile error? Thanks.

### TheoryOverflow

#### Derandomizing Valiant-Vazirani?

The Valiant-Vazirani theorem says that if there is a polynomial time algorithm (deterministic or randomized) for distinguishing between a SAT formula that has exactly one satisfying assignment, and an unsatisfiable formula - then NP=RP. This theorem is proved by showing that UNIQUE-SAT is NP-hard under randomized reductions.

Subject to plausible derandomization conjectures, the Theorem can be strengthened to "an efficient solution to UNIQUE-SAT implies NP = P".

My first instinct was to think that this implied the existence of a deterministic reduction from 3SAT to UNIQUE-SAT, but it's not clear to me how this particular reduction can be derandomized.

My question is: what is believed or known about "derandomizing reductions"? Is it/should it be possible? What about in the case of V-V?

Since UNIQUE-SAT is complete for PromiseNP under randomized reductions, can we use a derandomization tool to show that "a deterministic polynomial time solution to UNIQUE-SAT implies that PromiseNP = PromiseP"?

### StackOverflow

#### Can't find some Akka libs in Eclipse

I have recently started exploring Scala. I have installed Eclipse and integrated the Akka actor libs into the project's build path.

But whenever I try compiling the project, I get an error: it can't resolve the following libraries.

import akka.routing.{Routing, CyclicIterator}
import Routing._


Any idea how to configure Akka to work properly with Eclipse?

### CompsciOverflow

#### What is the difference between RAM and TM

For algorithm analysis we assume a generic one-processor Random Access Machine (RAM). As far as I know, the RAM is a machine that is no more powerful than the Turing machine: all algorithms can be implemented on the Turing machine. So my question is: if the Turing machine is just as efficient as the RAM, then why do we not use the Turing machine for algorithm analysis? What is the difference between the RAM and the TM?

### /r/emacs

Hi! I recently upgraded to Mavericks and, among the MANY issues I have run into, Emacs taking a very long time to load is the most annoying one.

I have installed different versions to confirm that this is not the problem, including Aquamacs and compiling this port from source.

I also checked running with the --debug-init flag but there was no difference.

I should say that I am not sure that this is a problem specific to emacs, but it is the only application that shows this behaviour. Coincidentally, the only other time something similar happened was while compiling the emacs port but I think that is likely to be related.

Has anyone experienced anything similar? Any help would be appreciated!

submitted by SonOfAragorn

### StackOverflow

#### How to execute "do" on elements in a sequence in clojure

I've been trying to figure out how to execute expressions that are stored as elements in a sequence. For example, here are two expressions in a sequence:

user=> (def z '((println 'x) 'y))
#'user/z
user=> z
((println (quote x)) (quote y))


Trying to use do or doall on them doesn’t seem to do anything

user=> (do z)
((println (quote x)) (quote y))
user=> (doall z)
((println (quote x)) (quote y))


The output I am trying to get is what I would see if I executed them not as a sequence but as a list of arguments:

user=> (do (println (quote x)) (quote y))
x
y


I tried mapping eval but that gives me an extra nil and returns a list

user=> (map eval z)
(x
nil y)


Any help would be greatly appreciated!
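The goal here (evaluate each stored expression in order for its side effects and keep the last value) translates to most languages with an eval. A Python stand-in for mapping eval over the sequence and then taking the last result might look like:

```python
# Two "expressions" stored as data, analogous to '((println 'x) 'y)
exprs = ["print('x')", "'y'"]

# Evaluate each in order, like mapping eval over the sequence ...
results = [eval(e) for e in exprs]

# ... then keep only the final value, discarding the None from print
print(results[-1])  # -> y
```

The "extra nil" in the question is the same phenomenon as the None here: the side-effecting expression evaluates to nothing, so only the last value is interesting.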

#### How can I install Python 2.7.3 on Ubuntu using Ansible

I am trying to install Python 2.7.3 with Ansible on Ubuntu 12

- name: Add snake repository
- name: Install postgresql
apt: pkg=python2.7 state=present
sudo: true
remote_user: vagrant


I get this error

TASK: [Add snake repository] **************************************************
failed: [192.168.0.28] => {"cmd": ["apt-key", "adv", "--recv-keys", "--keyserver", "hkp://keyserver.ubuntu.com:80", "FF3997E83CD969B409FB24BC5BB92C09DB82666C"], "failed": true, "rc": 2}
stderr: gpg: requesting key DB82666C from hkp server keyserver.ubuntu.com
gpg: no writable keyring found: eof
gpg: error reading [stream]': general error
gpg: Total number processed: 0

stdout: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 FF3997E83CD969B409FB24BC5BB92C09DB82666C

msg: gpg: requesting key DB82666C from hkp server keyserver.ubuntu.com
gpg: no writable keyring found: eof
gpg: error reading [stream]': general error
gpg: Total number processed: 0

FATAL: all hosts have already failed -- aborting


What is apt_key, and how do I get it?

### Planet Theory

#### Codes Meet Numbers

Results and problems about primes that make us think of coding theory


Zhi-Wei Sun is a professor in Mathematics at Nanjing University in China. He is the Editor in Chief of the recently-founded Journal of Combinatorics and Number Theory. His homepage is unique in prominently featuring a long list of

• …not his awards,

• …not his papers,

• …not his theorems,

• …but rather his conjectures.

Indeed we count 432 total conjectures in his list, subtracting one that he seems to have proved last year. They do not seem to be easy conjectures—this one implies the Riemann Hypothesis in a nontrivial way. Some of them involve powers of 2, which lend them a coding-theory flavor.

Today Ken and I wish to share ideas of using coding theory to prove number-theoretic results.

Coding theory owes much to Richard Hamming, who defined Hamming distance between codewords and created the binary Hamming codes. The Hamming sphere of radius ${d}$ centered on a word ${x}$ is the set of all words ${y}$ of the same length whose Hamming distance is at most ${d}$, meaning ${x}$ and ${y}$ differ in at most ${d}$ places. Our question is how the distributions of prime numbers and other sets relate to the Hamming spheres of their binary encodings. For example, ${5 = 101}$ and ${7 = 111}$ are twin primes of Hamming distance ${1}$, while ${11 = 1011}$ and ${13 = 1101}$ have distance ${2}$.

Sun has his own “Super Twin Prime Conjecture” listed first on his page. Call a pair ${(k,m)}$ of integers “super” if ${\pi(k)}$ and ${\pi(\pi(m))}$ are both twin primes, indeed the lower member of a twin pair, where ${\pi(k)}$ denotes the ${k^{th}}$ prime number. The conjecture—a hybrid of Twin and Goldbach—is:

Every integer ${n \geq 3}$ is the sum of a super pair.

He has verified this for ${n}$ up to ${10^9}$. What can motivate such a conjecture? Certainly one motivation is expected density—not only “should” the twin primes be infinite, but they should be dense enough to fill this kind of requirement, much as the set of primes themselves is dense enough to make Goldbach’s conjecture—that every even number ${n \geq 4}$ is the sum of two primes—plausible. But what other structure can supply motivation? That is what we seek.
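Sun's verification is easy to reproduce on a small scale. A sketch in Python (the helper names are our own; ${\pi(k)}$ is the ${k^{th}}$ prime, 1-indexed) that checks the conjecture for small ${n}$:

```python
def primes_upto(limit):
    """Sieve of Eratosthenes: list of primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i in range(limit + 1) if sieve[i]]

PRIMES = primes_upto(10_000)
TWIN_LOWER = {p for p, q in zip(PRIMES, PRIMES[1:]) if q - p == 2}

def prime(k):  # pi(k): the k-th prime, so prime(1) == 2
    return PRIMES[k - 1]

def is_super(k, m):
    # (k, m) is "super" if pi(k) and pi(pi(m)) are both lower twin primes
    return prime(k) in TWIN_LOWER and prime(prime(m)) in TWIN_LOWER

def is_sum_of_super_pair(n):
    return any(is_super(k, n - k) for k in range(1, n))

missing = [n for n in range(3, 200) if not is_sum_of_super_pair(n)]
print(missing)  # expect [] if the conjecture holds in this range
```

For instance, ${n=3}$ works via the pair ${(2,1)}$: ${\pi(2)=3}$ and ${\pi(\pi(1))=\pi(2)=3}$, and ${3}$ is the lower member of the twin pair ${(3,5)}$.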

## Numbers and Codes

Coding theory is all about the properties of vectors over finite sets, often over the binary field ${\{0,1\}}$. Of course number theory is all about the additive and multiplicative properties of integers. These seem to be vastly different, yet they are related: every number can be represented by a binary vector.

Coding theory gives a way to express relationships that might be cumbersome with arithmetical tools such as congruences. For example, say a number ${r}$ is “top modulo ${2^m}$” if its ${m}$-bit encoding begins with a ${1}$. Of course we can specify this as ${2^{m-1} \leq r < 2^m}$, but “less-than” is a dodgy concept when working modulo some number. When ${m}$ is odd we might want to define instead that the middle bit of ${r}$ in binary is a ${1}$. Middle bits are sometimes important in crypto, but relations involving them are not always easy to define via congruences.

Of course a number is odd iff its last bit in binary is ${1}$, and is congruent to ${3}$ mod ${4}$ iff its second-last bit is also ${1}$. The distinction between congruence to ${1}$ versus ${3}$ mod ${4}$ is generally important for odd primes. How about congruence to ${5}$ or ${7}$ mod ${8}$, versus ${1}$ or ${3}$, that is being top mod ${8}$? Of course one important distinction is which congruences are quadratic residues, but with binary-code notions we can define others.

The number ${71 = 1000111}$ is congruent to ${7}$ mod ${8}$, and is part of a twin pair of Hamming distance ${3}$ with ${73 = 1001001}$. The first twin pair with greater Hamming distance actually gives distance ${6}$: ${191 = 10111111}$ and ${193 = 11000001}$. Next comes ${2687 = 101001111111}$ and ${2689 = 101010000001}$ for distance ${7}$. Is the Hamming distance of twin primes unbounded?

Of course we don’t even know if there are infinitely many twin primes. This is really asking whether twin primes can flank a multiple of an arbitrarily large power of ${2}$. Quick web searches have not found this question, while our point is that our motive came from the coding-theory angle.
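These distances are easy to compute: the Hamming distance of two binary encodings (with the shorter one zero-padded) is the popcount of their XOR. A quick Python sketch that reproduces the examples above and scans twin pairs for growing distance records:

```python
def hamming(a, b):
    """Hamming distance of binary encodings; shorter one is zero-padded."""
    return bin(a ^ b).count("1")

print(hamming(5, 7))        # -> 1  (101 vs 111)
print(hamming(11, 13))      # -> 2  (1011 vs 1101)
print(hamming(191, 193))    # -> 6
print(hamming(2687, 2689))  # -> 7

def is_prime(m):
    if m < 2:
        return False
    f = 2
    while f * f <= m:
        if m % f == 0:
            return False
        f += 1
    return True

# Print each twin pair that sets a new Hamming-distance record.
record = 0
for p in range(3, 3000, 2):
    if is_prime(p) and is_prime(p + 2) and hamming(p, p + 2) > record:
        record = hamming(p, p + 2)
        print(p, p + 2, record)
```

The scan confirms the text: no twin pair below ${191}$ exceeds distance ${3}$, and ${2687, 2689}$ is the first pair reaching distance ${7}$.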

## Polignacs and Twins and Spheres

In 1848, Camille Marie de Polignac somewhat lazily conjectured that every odd number ${n}$ is the sum of a prime number and a power of ${2}$. We speculate that he may have intended to exclude false cases such as ${n = 127}$ where ${n}$ itself is prime, but even so he missed ${n = 905}$ amid his reported (but to us incredible) claim to have verified this up to about 3 million. Perhaps it was spoken with the bluster of a Confederate major-general during the Civil War, which is surprisingly what this French nobleman became.

His older brother, Alphonse de Polignac, made a somewhat less lazy conjecture: that every even number ${k}$ is the difference between infinitely many pairs of consecutive primes. Of course with ${k=2}$ this subsumes the Twin Primes Conjecture, and indeed the latter is sometimes called de Polignac’s Conjecture after him.

Should Camille have teamed up with his brother to make his conjecture? And would they have done better if they had been twins—maybe prove something about their conjecture? Well we have an example to go by: Zhi-Wei Sun has a twin brother, Zhi-Hong Sun at Huaiyin Normal University, and they teamed up in 1992 to prove something about the following conjecture by Donald Wall:

There are infinitely many primes ${p}$ such that either ${p \equiv \pm 1}$ modulo ${5}$ and ${p^2}$ divides the Fibonacci number ${F_{p - 1}}$, or ${p \equiv \pm 2}$ modulo ${5}$ and ${p^2}$ divides the Fibonacci number ${F_{p + 1}}$, where ${F_n}$ is indexed with ${F_0 = 0}$, ${F_1 = 1}$.

What Sun and Sun proved is that any minimal counterexample to Fermat’s Last Theorem would need to involve such a prime—of course from the proof of FLT two years later, we know there are none. They also gave a sufficient condition for a “Wall-Sun-Sun prime” to exist, though none has yet been found.

Back to the Polignacs, we can try to capture ideas of both their conjectures with a case of what is actually a pretty general kind of question—a kind one can pose about other sets of numbers besides the primes:

What is the minimum ${d}$ such that every odd number is within the distance-${d}$ Hamming sphere of a prime number? Is it finite?

To get the even numbers too we can add ${1}$ to ${d}$. Of course this is still a strong “every” kind of conjecture, and those are hard to prove. One can first try to attack “infinitely-many” versions. Obviously there are infinitely many odd numbers ${q}$ that are ${\pm 2}$ from a prime ${p}$, but if we insist that ${q}$ too be prime we have our old friend the Twin Prime Conjecture again. So here is the corresponding Hamming-like question:

What is the minimum ${d}$ such that there are infinitely many prime numbers ${q}$ that are within Hamming distance ${d}$ of some other prime number?

Using Hamming’s own ideas in coding theory, we can prove the minimum is at most ${d=2}$. Note that this is stronger than saying there are infinitely many pairs of primes ${(p,q)}$ such that ${q - p = 2^k \pm 2^l}$, because we are also restricting what ${p}$ and ${q}$ have in the bit positions ${k}$ and ${l}$ from the end.

## The Proof

The theorem is not that amazing, or unexpected, but we think how we prove it may be of interest. The proof is via Hamming's famous theorem on the density of codes. Let ${A_{q}(n,d)}$ be the size of the largest set ${S}$ of length-${n}$ vectors over an alphabet of size ${q}$ such that any two distinct code words in ${S}$ are at least Hamming distance ${d}$ apart.

Theorem 1 For all ${n}$ and ${q}$ and ${d}$:

$\displaystyle A_{q}(n,d) \le \frac{q^{n}}{\sum_{k=0}^{t} \binom{n}{k}(q-1)^{k}},$

where

$\displaystyle t = \left\lfloor \frac{d-1}{2} \right\rfloor.$
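As a concrete instance of the bound with ${t = \lfloor (d-1)/2 \rfloor}$: the binary ${[7,4]}$ Hamming code has ${16}$ codewords with minimum distance ${3}$, and it meets the bound with equality (it is a perfect code). A quick numeric check in Python:

```python
from math import comb

def hamming_bound(q, n, d):
    """Upper bound on A_q(n, d) from Theorem 1; exact integer division here."""
    t = (d - 1) // 2
    return q ** n // sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))

# The [7,4] binary Hamming code (16 codewords, d = 3) is perfect:
print(hamming_bound(2, 7, 3))  # -> 16 = 2^7 / (1 + 7), met with equality
```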

Now to state formally what we are proving, it is:

Theorem 2 For every sufficiently large ${n}$, there are primes ${p,q}$ with ${2^n \leq p,q < 2^{n+1}}$ such that ${p}$ and ${q}$ have Hamming distance at most ${2}$.

Proof: Consider the set ${S}$ of all primes in the interval ${[2^{n}, \dots, 2^{n+1}-1]}$. These of course can be represented by ${n}$-bit vectors, eliding the leading ${1}$ in the ${2^n}$ place. Think of them as a code. We will show that its minimum distance ${d}$ is at most ${2}$, from which the theorem follows.

Suppose that the distance is larger, that is ${d \ge 3}$. Then apply Hamming’s Theorem to ${A_{2}(n,d)}$, noting that ${t \ge 1}$. This yields

$\displaystyle |S| \le \frac{2^{n}}{\sum_{k=0}^{t} \binom{n}{k}} \le \frac{2^{n}}{1+n}.$

The Prime Number Theorem states that the number of primes up to ${N}$ is asymptotic to ${N/\ln N}$ as ${N \longrightarrow \infty}$, where ${\ln}$ is the natural log. By an easy manipulation of estimates it follows that for any ${\epsilon > 0}$ and all large enough ${N}$, the primes have density at least ${(1-\epsilon)\frac{1}{\ln N}}$ in ${[N/2,\dots,N-1]}$. Taking ${N = 2^{n+1}}$, it follows that

$\displaystyle |S| \geq (1-\epsilon)\frac{2^{n}}{(n+1)\ln 2}.$

Since ${1/\ln 2 = \log_2 e = 1.44\dots}$, this implies with the above that

$\displaystyle \frac{1.44(1-\epsilon)2^n}{n+1} \le \frac{2^n}{n+1},$

but this is clearly false for large enough ${n}$ and small enough ${\epsilon}$. This contradiction proves ${d \leq 2}$ and hence the theorem. $\Box$
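The counting in the proof can be checked concretely for a modest block length. The sketch below (plain Python; the choice ${n=10}$ is ours, not the post's) counts the primes in ${[2^n, 2^{n+1})}$ with a simple sieve and compares against the Hamming bound for minimum distance ${d \ge 3}$:

```python
def sieve(limit):
    """Bytearray b with b[k] = 1 iff k is prime, for 0 <= k <= limit."""
    b = bytearray([1]) * (limit + 1)
    b[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if b[i]:
            b[i * i :: i] = bytearray(len(b[i * i :: i]))
    return b

n = 10
is_prime = sieve(2 ** (n + 1))
num_primes = sum(is_prime[2 ** n : 2 ** (n + 1)])   # primes in [2^n, 2^(n+1))
hamming_bound = 2 ** n / (1 + n)                    # A_2(n, 3) <= 2^n / (1 + n)
print(num_primes, hamming_bound)  # 137 vs 93.09...
```

Since the prime count (137) already exceeds the sphere-packing bound at ${n=10}$, the "code" of primes in that interval cannot have minimum distance 3: some pair of primes there is within Hamming distance 2.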

As the proof shows, there is actually a fair bit of “slack” in the counting. Hence the theorem can be extended to add conditions on the primes: we can further restrict the sizes of the primes, their residues modulo small numbers, and other properties. Indeed the counting makes this all close to working with ${d=1}$. That takes us back within the sphere of Camille de Polignac’s question as well.

## An Obstinate Question?

For ${d=1}$ the question again comes in “every” (with the qualifier “large enough”) and “infinitely many” flavors:

1. Is it true that for every large enough prime ${p}$, there is a prime ${q}$ such that ${p}$ and ${q}$ differ in one bit, or at least differ by a power of ${2}$? (no)

2. Are there infinitely many primes ${p}$ such that for some other prime ${q}$, ${p}$ and ${q}$ differ in one bit, or at least ${q - p}$ is a power of ${2}$? (open)

Note that ${q > p}$ is allowed. Without that condition we’d have already the counterexample ${p = 127}$ to Camille’s conjecture. Incidentally, counterexamples to Camille are called obstinate numbers, and there are various pages devoted to enumerating them. A case where ${q > p}$ is important is ${p = 113,\!921}$: ${q = p + 2^{141}}$ is prime. Of course whenever ${q \geq 2p}$ such a pair also has Hamming distance ${1}$, using leading ${0}$s to pad ${p}$ to the same length as ${q}$.

In 1964, Fred Cohen and John Selfridge noted that allowing ${q > p}$ made Camille’s idea good for every odd number up to ${262,144}$. However, they proved that the prime

$\displaystyle p = 47,\!867,\!742,\!232,\!066,\!880,\!047,\!611,\!079$

cannot be written as ${p = q \pm 2^k}$ with ${q}$ prime. Moreover, they gave an infinite arithmetic progression of numbers ${a + bn}$ with ${a,b}$ coprime that cannot be written as ${\pm 2^k \pm q^l}$ with ${q}$ prime. Since every such progression contains infinitely many primes, this finally lays question 1 to rest even with the “large enough” modifier. Zhi-wei Sun himself made good on something stated in their abstract but not proved in their paper, in a 2000 paper giving an infinite arithmetic progression of numbers that cannot be written as ${p^k \pm q^l}$ with ${p,q}$ prime and ${k,l > 0}$.

All this still, however, leaves the second question open. We would like to prove it, indeed find moderately dense sets of such pairs.

## Open Problems

Are there infinitely many pairs of primes that differ in just one bit?

We note that there are people who have thought about connections between coding theory and number theory. For example, Toyokazu Hiramatsu and Günter Köhler have a whole monograph titled Coding Theory and Number Theory on this topic. But the idea there is to apply number theory to shed light on the structure of codes. Elliptic curves, for instance, can be used to construct certain interesting codes. We are interested in the impact of coding theory on number theory, such as the distribution of important sets like the primes, which in turn may have applications in computing theory.

[$2^k + 2^l$ changed to $2^k \pm 2^l$]

### StackOverflow

#### Erlang function calls - Why does it not work to call self() as a direct argument?

My friend and I had been trying to find a bug in our code for some time when we realised that this was the problem:

random_function(spawn_link(fun() -> worker(List, self(), Death) end));


This was the solution:

PID = self(),
random_function(spawn_link(fun() -> worker(List, PID, Death) end));


So my question is, why did it not work to just call the self()-function straight away like that? Is it because it's an anonymous function or is it some kind of special Erlang thing?
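The same deferred-evaluation pitfall can be illustrated outside Erlang. In this Python sketch (our analogy, not Erlang semantics: `threading.get_ident()` stands in for `self()`), a call placed inside the deferred function runs in the worker thread, while binding the value first captures the caller's identity:

```python
import threading

main_id = threading.get_ident()
results = []

# Like self() inside the fun: get_ident() runs in the spawned worker,
# so it reports the worker's identity, not the caller's.
t = threading.Thread(target=lambda: results.append(threading.get_ident()))
t.start(); t.join()

# Like PID = self() before spawning: the value is captured in the caller.
pid = threading.get_ident()
t = threading.Thread(target=lambda: results.append(pid))
t.start(); t.join()

print(results[0] != main_id, results[1] == main_id)  # True True
```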

### StackOverflow

#### Have sbt put javadocs and sources of dependencies on the class path

"mygroup" % "mymodule" % "myversion" withJavadoc() withSources()


But these jars don't seem to be on the runtime classpath?

What I would like to do, is access the javadocs and sources from my application. Can I make these jars appear as managed resources, such that I could do

ClassLoader.getSystemClassLoader.getResource("/my/package/MyDependency.scala")


?

### QuantOverflow

#### Do intraday volume and volatility share the same properties?

Volatility clustering and mean reversion are very well known properties that one could use when trading. Traders, especially in the options world, do take realized vol into account (e.g. by forecasting it, or by looking at which percentile the current volatility corresponds to).

I am wondering if also intraday volumes have the same kind of properties that can be exploited somehow.

I see that some traders look at volume profiles and use indicators like VWAP (volume-weighted average price) and PVP (peak volume price, i.e. the price where the largest intraday volume was traded). In general they assume that intraday volumes tend to generate a symmetric distribution, and they follow this kind of rule to forecast the price direction:

if PVP > VWAP, then the volume distribution is skewed to the upside, and this generates a "pressure" for prices to move downwards, at least until the VWAP; with PVP < VWAP, the opposite holds.

There is an exception to this rule: when the price action is at one of the extremes of the volume distribution (e.g. price > PVP > VWAP), the previous logic doesn't apply (even if PVP > VWAP).
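For concreteness, the two indicators can be computed from a list of (price, volume) trades. This is a minimal sketch with made-up numbers, only to pin down the definitions of VWAP and PVP used above:

```python
from collections import defaultdict

# Hypothetical (price, volume) trades, purely illustrative
trades = [(100.0, 50), (100.5, 200), (101.0, 120), (100.5, 80), (99.5, 30)]

total_volume = sum(v for _, v in trades)
vwap = sum(p * v for p, v in trades) / total_volume  # volume-weighted average price

volume_at_price = defaultdict(int)
for p, v in trades:
    volume_at_price[p] += v
pvp = max(volume_at_price, key=volume_at_price.get)  # price with the largest traded volume

print(vwap, pvp)
```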

Is there any statistical evidence that intraday volumes actually tend to generate symmetric distributions thus making it possible to exploit the temporary skewness that is generated intraday?

Is there any study on that or anyone willing to share her/his experience on that?

### Planet Theory

#### Induced Clebsch subgraphs

Today I've been playing with the induced subgraphs of the Clebsch graph. Several other interesting and well-known graphs can be obtained from it by deleting a small number of vertices and forming the induced subgraph of the remaining vertices.

To begin with, one simple construction of the Clebsch graph is to take all length-four binary strings as vertices, and to make two strings neighbors when they differ either by a single bit or by all four bits. So it has sixteen vertices and 40 edges, and can be visualized as a four-dimensional hypercube with the long diagonals added.
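The construction just described is easy to sanity-check by brute force. This small sketch (plain Python, identifying each 4-bit string with an integer 0..15) confirms the vertex count, edge count, and 5-regularity:

```python
vertices = range(16)  # length-four binary strings as integers 0..15

def adjacent(u, v):
    diff = bin(u ^ v).count("1")   # number of bit positions where u and v differ
    return diff == 1 or diff == 4  # differ in a single bit, or in all four bits

edges = [(u, v) for u in vertices for v in vertices if u < v and adjacent(u, v)]
degree = {u: sum(adjacent(u, v) for v in vertices if v != u) for u in vertices}

print(len(edges), set(degree.values()))  # 40 {5}
```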

It can be split into two three-dimensional cubes, by partitioning the binary strings according to the value of their first bit. Since the cube is bipartite, we can color the whole Clebsch graph with four colors, by using two colors within each cube. This is optimal, because the largest independent sets in the Clebsch graph (the sets of neighbors of a single vertex) have five vertices, not enough for three of them to cover the whole graph.

The Clebsch graph can also be split in a different way into two copies of the Wagner graph:

Removing a vertex and its five neighbors from the Clebsch graph (a K1,5 subgraph) leaves a copy of the Petersen graph as the remaining graph. Similarly removing a maximum independent set leaves a Petersen graph together with a single isolated vertex.

Removing a five-vertex cycle (such as the central cycle of the Petersen graph above) leaves a copy of the Grötzsch graph:

Removing a four-vertex maximal independent set (four independent vertices that do not have a single common neighbor) leaves a subdivision of the complete bipartite graph K4,4, in which four edges forming a matching have been subdivided:

Removing a four-cycle leaves a twelve-vertex torus graph with 24-way symmetry:

Here are a couple of less-symmetric large planar induced subgraphs.

The second of these planar graphs can be formed by removing all binary strings with equal numbers of zeros and ones, a six-vertex subset that induces a three-edge matching. If we instead partition the vertices into strings with even and odd parity, we get a partition of the Clebsch graph into two eight-vertex induced matchings.

The Schläfli graph (or its complement) has many induced Clebsch graphs inside it, so presumably it also has an interesting collection of symmetric induced subgraphs. But that's getting a bit too large for the sort of by-hand analysis I did to get the list here; it calls for automation instead.

### StackOverflow

#### How do I know what version of scala the maven scala plugin uses?

I am trying out spring mvc using scala, and the compiler is 2.10.3

I am using another scala library that was built using 2.9, and it is giving me an error like

java.lang.NoClassDefFoundError


when I use it in my spring mvc app. I'm guessing it is because I compiled it with an older version of scala.

I am using the latest scala maven plugin 2.15.3. When I run mvn package, how do I know which scala compiler version it is using?

I have 2.10.3 installed in /usr/local/opt/scala

### TheoryOverflow

#### Fourier Analysis on checking whether there exists a vector in hypercube orthogonal to a set of vectors?

I know virtually nothing about Fourier analysis, and I'd like to know whether it's worth learning this topic for my problem.

My problem is: Given vectors $h_1,\cdots,h_k\in\{+1,-1\}^n$ where $k < n$, decide whether there exists another vector $x\in\{+1,-1\}^n$ that is orthogonal to all of $h_1,\cdots,h_k$.

I think I may formulate my problem with the help of a quadratic function $f(x) = x^T\left(\sum_{i=1}^k h_i h_i^T\right)x$ to have a condition that

$f(x) = 0$ if and only if $x$ is orthogonal to $h_1,\cdots,h_k$.

As the desired function $f:\{+1,-1\}^n\to\mathbb{R}$ is a function on the Boolean cube, it may be uniquely represented by a certain Fourier expansion. Then my job is to check whether this function has zero Fourier coefficients or not.

I know there can be $2^n$ Fourier coefficients. And it may be impossible to check whether a function has a Fourier expansion with zero coefficients. Does this approach make sense? Is there any reference / related work on this kind of problem?
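As a sanity check of the quadratic-form reduction: since $f(x) = \sum_i (h_i \cdot x)^2$, one has $f(x) = 0$ exactly when $x$ is orthogonal to every $h_i$. A brute-force sketch over a tiny hypothetical instance ($n = 4$, two hand-picked vectors):

```python
import itertools

h = [(1, 1, 1, 1), (1, 1, -1, -1)]  # hypothetical instance, k = 2 < n = 4

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def f(x):
    # x^T (sum_i h_i h_i^T) x = sum_i (h_i . x)^2
    return sum(dot(hi, x) ** 2 for hi in h)

cube = list(itertools.product((1, -1), repeat=4))
solutions = [x for x in cube if f(x) == 0]

# f(x) == 0 exactly when x is orthogonal to every h_i
assert all((f(x) == 0) == all(dot(hi, x) == 0 for hi in h) for x in cube)
print(solutions)  # includes (1, -1, 1, -1)
```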

### CompsciOverflow

#### An NPDA for the language $L = \{w \mid w \in \{a,b,c\}^*, n_c(w) = n_a(w) + n_b(w)\}$

Consider the language $L = \{w \mid w \in \{a,b,c\}^*, n_c(w) = n_a(w) + n_b(w)\}$, where $n_p(w)$ is defined to be the number of occurrences of the symbol $p$ in $w$.

We know that there exists some NPDA for the language, right? If so, I can't seem to find it, and in each instance I'm just missing something, and it is driving me nuts.

Consider these, seemingly poor, attempts at L:

1. Nope, nope, nope.
2. NOPE, NOPE, NOPE.
3. N0$\rho \epsilon$

Could someone please feel sorry enough for me to throw me a bone here? I woke up at 8 and have been working on this for nearly 12 hours. In general, what kind of structure follows from the idea that for a set of symbols somewhere in $\Sigma^*$ whose instances can appear anywhere and in any order, we want to count some potentially infinite quantity of those suckers and see if they match up.

I have a gut feeling that I need to exploit the fact that $0=n_a(w)+n_b(w)-n_c(w)$, but I haven't the faintest idea how to construct such a marvelous machine.
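That gut feeling is right: a single stack can hold the running value of $n_a(w)+n_b(w)-n_c(w)$ as a pile of like-signed markers. This plain-Python sketch (our illustration of the bookkeeping, not a PDA construction itself) treats a and b as $+1$ and c as $-1$:

```python
def balanced(w):
    """Accept iff n_c(w) == n_a(w) + n_b(w), using one stack of '+'/'-' markers."""
    stack = []
    for ch in w:
        sym = "+" if ch in "ab" else "-"  # a, b count +1; c counts -1
        if stack and stack[-1] != sym:
            stack.pop()        # opposite signs cancel
        else:
            stack.append(sym)  # same sign accumulates
    return not stack           # empty stack <=> the counts balance

print(balanced("abcc"), balanced("ccab"), balanced("abc"))  # True True False
```

A PDA does exactly this with its stack, accepting on empty stack (or bottom marker) at the end of the input.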

## Yippee!

I think this one actually works...

### StackOverflow

#### Write a function that takes a function with specific parameters as parameter

This title must be confusing, but basically I have a lot of functions that take one List[Double] as a parameter and return a Double. I want to make a function that only lets me take in functions that take a List[Double] and return Double.

I tried

private def testforNull(func(list: List[Double]): => Double)


but this gives me an error. Can someone point me to the right way of doing this?

#### How to allow optional outermost parenthesis?

I am writing a parser for certain expressions. I want to allow parentheses to be optional at the outermost level. My current parser looks like this:

class MyParser extends JavaTokenParsers {

def expr = andExpr | orExpr | term
def andExpr = "(" ~> expr ~ "and" ~ expr <~ ")"
def orExpr = "(" ~> expr ~ "or" ~ expr <~ ")"
def term = """[a-z]""".r
}


As it is, this parser accepts only fully parenthesized expressions, such as:

val s1 = "(a and b)"
val s2 = "((a and b) or c)"
val s3 = "((a and b) or (c and d))"


My question is, is there any modification I can make to this parser in order for the outermost parenthesis to be optional? I would like to accept the string:

val s4 = "(a and b) or (c and d)"


Thanks!

### Fefe

#### What does the Verfassungsschutz actually do when they ...

What does the Verfassungsschutz actually do when they aren't busy shredding evidence of their own failures, denying Linkspartei members of parliament access to files, or deploying the NSA's XKeyscore? They have one of their Nazi informants terrorize the director of a district court.

#### Why are the Americans actually under so much pressure ...

Why are the Americans actually under so much pressure regarding Ukraine? Whether you annoy Putin now or in six months, why is that so important to the Americans? And even if Ukraine had stayed under Russian control, so what? The Americans have already brought all the other countries in the region into NATO. Why is it so urgent now that they had to force this crazy ultimatum on Ukraine?

Here is an attempt at an explanation: at the end of last year, Chevron signed a 10-billion-dollar deal with Ukraine. It concerns the extraction of shale gas. Royal Dutch Shell signed a similar deal. That could explain why the EU pushed along too.

Shale gas is extracted by fracking. If you don't give a damn about environmental protection, fracking also looks really great on paper: Europe pays the Russians more than double for natural gas what the Americans pay for theirs produced by fracking. Ukraine does get a 33% discount from the Russians, but still pays more than 50% above what the Americans pay for their fracking gas.

The Russians have already bought the second-largest oil refinery in Ukraine, and are currently working on taking over the largest.

Incidentally, a significant part of the natural gas that Europe buys from the Russians passes through Ukraine, which historically has always liked to skim off a little along the way. A game the other countries along the route play as well, especially Belarus and Poland. That is why Russia built the pipeline through the Baltic Sea and made it the personal project of ex-chancellor Schröder.

So now let's look at the map of natural gas deposits in Ukraine. As you can see, there are two large deposits, and the bigger one is at the eastern edge. That is, coincidentally, also (after Crimea) the part of Ukraine with the largest share of Russian-speaking inhabitants.

The map also shows the fat deposits under the Baltic states, and those under Poland. Those were also among the first countries that NATO grabbed. Freedom! Democracy! Human rights!1!!

In a speech at the National Press Club in Washington DC last December as Ukraine's Maidan Square clashes escalated, Nuland confirmed that the US had invested in total "over $5 billion" to "ensure a secure and prosperous and democratic Ukraine" - she specifically congratulated the "Euromaidan" movement.

Nuland is the one with the "fuck the EU" remark.

### StackOverflow

#### Scala/Play: Assigning a value that's not there yet

Say I have something like this:

case class User(id: Option[Long], name: String)
case class Account(id: Option[Long], userId: Long)

object Account {
  // apply method
  def apply(i: Identity): Account = {
    Account(None, SomeFutureUserId)
  }
}

SomeFutureUserId is supposed to be some User's id, but I can't do:

Account = Account(None, SomeUser.id)

How would I let Play! know that there will be a Long type in place of SomeFutureUserId? Something like a placeholder?

### Planet Clojure

#### Building a Database-backed Clojure Web App…

From the post:

Some time ago I wrote a post about Java In the Auto-Scaling Cloud. In the post, I mentioned Heroku. In today's post, I want to take time to point back to Heroku again, this time with the focus on building web applications.

Heroku Dev Center recently posted a great tutorial on building a database-backed Clojure web application. In this example, a twitter-like app is built that stores "shouts" to a PostgreSQL database. It covers a lot of territory: connecting to PostgreSQL, web bindings with Compojure, HTML templating with Hiccup, assembling the application and testing it, and finally deploying it.

If you aren't working on a weekend project already, here is one for your consideration!

### /r/compsci

#### [Comp. Geometry] Transformation for reflection about a line

I'm not sure whether this should be here or in math or geometry subreddits, so warn me if I'm in the wrong subreddit.
For a piece of homework, I have to "describe the transformation M which reflects an arbitrary point P about a line L (of which the y-intercept and angle of inclination with respect to the x-axis are given)."

I imagine I can do it by finding the x-intercept, translating the point on the x-axis by that amount, rotating by twice the angle P makes with the line L, then translating back; but I'm wondering if there is a better or simpler solution.

submitted by oselcuk
[link] [2 comments]

### Planet FreeBSD

#### bhyvecon 2014

bhyvecon 2014 (http://bhyvecon.org/), SAKURA Internet Research Center, Tokyo, Japan, 12 March, 2014. See the bhyve hypervisor in action and ask a core bhyve developer your technical questions.

### StackOverflow

#### What is the best way to check the type of a Scala variable? [duplicate]

This question already has an answer here: Is there a simple way to determine if a variable is a list, dictionary, or something else?

Basically I am getting an object back that may be either type and I need to be able to tell the difference. In Python, we have "Type()" and "Typeof()". In Scala, given

scala> val c: String = "Hello world"

is there any way to determine the type, i.e. for Typeof(c) to print String?

### CompsciOverflow

#### How a Turing machine can be used as an enumerator

Currently I am studying Turing machines and I understand that a Turing machine can produce all the strings of the language accepted by that particular Turing machine. We call such a Turing machine an enumerator. I have studied the formal definition of enumerators, which have no input. But I am unable to see how a Turing machine can work as an enumerator for the language $L=\{a^nb^n:n\geq0\}$. Any help would be greatly appreciated.

### TheoryOverflow

#### Grover's search algorithm for 3-coloring

According to Arora & Barak (pdf), pg.
186, for a polynomial-time computable function $f: \{0,1\}^n \to \{0,1\}$ (represented as a circuit computing $f$), Grover's algorithm finds in $O(\text{poly}(n)2^{n/2})$ time a string $a$ such that $f(a) = 1$ (if such a string exists).

My question is an application of Grover's algorithm to 3-coloring. How can you show, using Grover's algorithm, that 3-coloring can be solved on a quantum computer in time $O(\text{poly}(n)2^{n/2})$, where $n$ is the number of vertices?

This is not a direct application of the algorithm: although you can easily encode each of the $3^n$ (valid and invalid) colorings of the $n$ vertices using $\log_2(3^n) = n\log_2(3) = O(n)$ bits, this means that Grover's algorithm gives a run time of $O(\text{poly}(n)2^{n\log_2(3)/2}) = O(\text{poly}(n)3^{n/2})$.

So maybe you would need to show that a coloring (possibly invalid) can be encoded using only $n + O(1)$ bits? How would you show that?

### CompsciOverflow

#### Context-free grammar for $L = \{a^n : n\leq2^{20}\}$

I want to find a context-free grammar for $L = \{a^n : n\leq2^{20}\}$. There's one for sure, since the language is finite. I approached it in two ways and both seemed dead ends. One was to set a limit during the production of the new strings, but I don't think there's such a thing in CFGs. The second approach was to produce the strings of the language top-down, starting from the last string $a^{2^{20}}$ and removing an $a$ each time until epsilon, but I don't think that's achievable either. Any ideas? Thanks in advance.

### TheoryOverflow

#### How does binary addition work? [migrated]

Binary, for one, I have found confusing no matter what. I watch minecraft redstone videos on binary adders, videos on real binary adders, diagrams, etc... I have not learned much at all. How do electrons flowing through wires made of gold "add/subtract" to make numbers through some logic gates!?

### Planet Emacsen

#### Ivan Kanis: Gnus simple split unexpected feature

I used to use the fancy split.
Some months ago I reduced my inboxes a lot and felt fancy split wasn't needed anymore. I ended up with something like this:

(setq nnmail-split-methods
      '(("linux-nantes" "\\([tT]o\\|[cC]c\\):.*linux-nantes@univ-nantes\\.fr")
        ("bbdb" "\\([tT]o\\|[cC]c\\):.*bbdb-info@lists\\.sourceforge\\.net")
        ("stumpwm" "\\([tT]o\\|[cC]c\\):.*stumpwm-devel@nongnu\\.org")
        ("interesting" private-is-email-in-bbdb)
        ("spam" "X-Spam-Flag: YES")
        ("boring" "")))

The idea is to put each mailing list in its own box. Interesting e-mails are identified from my bbdb database, queried by the function private-is-email-in-bbdb.

It worked as I expected except when there was more than one positive match. For example, someone I know in my bbdb posted to the Linux Nantes mailing list. What happened is that Gnus created a copy of the e-mail in each matching inbox! Clearly that's not what I want.

So I am using again the fancy splitter, which does exactly what I expect. The | statement indicates to stop at the first positive match.

(setq nnmail-split-methods 'nnmail-split-fancy
      nnmail-split-fancy
      '(| (to "linux-nantes@univ-nantes\\.fr" "linux-nantes")
          (to "bbdb-info@lists\\.sourceforge\\.net" "bbdb")
          (to "stumpwm-devel@nongnu\\.org" "stumpwm")
          (: private-is-email-in-bbdb)
          ("X-Spam-Flag" "YES" "spam")
          "boring"))

Note that the syntax is much easier to read. There is no hairy regexp to match To and Cc; all that is handled by the to symbol. I had to modify private-is-email-in-bbdb to return "interesting" instead of t on success.

### HN Daily

#### Daily Hacker News for 2014-03-08

The 10 highest-rated articles on Hacker News on March 08, 2014 which have not appeared on any previous Hacker News Daily are:

## March 08, 2014

### /r/compsci

#### PhD funding cut--where to turn?

Hi--I give English lessons to a Portuguese PhD student. The economy here is beyond apocalyptic and he will likely be losing his funding at the end of this month. He is 1.5 years into the program.
Can anyone offer up some pointers on where he can start looking for scholarships in American or European universities?

submitted by sonatashark
[link] [4 comments]

### /r/netsec

#### Google Exploit -- Steal Login Email Addresses

### /r/compsci

#### Snark, Chord, and Trust in Algorithms

### StackOverflow

#### Scala Inheritance (Too Many Arguments For Constructor)

I'm trying to extend a CSVReader that I found at https://github.com/tototoshi/scala-csv/blob/master/src/main/scala/com/github/tototoshi/csv/CSVReader.scala

I wrote the following bare-bones shell of a class:

class CSVOtherReader(reader: Reader, format: CSVFormat) extends CSVReader(reader, format) {
}

which gives me the error:

too many arguments for constructor CSVReader: (reader: java.io.Reader)(implicit format: com.github.tototoshi.csv.CSVFormat)com.github.tototoshi.csv.CSVReader

This class was created directly from the Scala IDE Eclipse Plugin by creating a new class, marking it as inheriting from a superclass and directly pointing to the CSVReader class. Because of this, I feel that the syntax should be correct, but I'm wondering in what cases this might not work. Is there something about the parent class (found at the github link) that would prevent me from doing this? A quick look at inheritance in Scala makes it seem like this syntax is correct.

-Arjun

#### How to implement custom made Methods in Java that will be common to many classes?

I need something in Java that can hold my custom-made methods that are going to be common to many other classes that I'm using in my project. What needs to be done?

### /r/compsci

#### Binary String Prefix

Consider the following problem: given a binary string s, determine the length of the longest prefix of s that has more 0's than 1's.

I was a bit confused about the definition of prefix. Does this definition include the string s itself? I imagine so - due to s being a substring of s. But, I'm not too sure.
submitted by abrarisland
[link] [3 comments]

### StackOverflow

#### set a reference equal to another reference in SML

Is it possible to update a reference to equal another reference in SML? This may be a really silly question, but I can't seem to get my head around it. For example, consider the following two references:

val x1 = ref(NONE);
val x2 = ref(NONE);
(* update so that x1 = x2 returns true *)

How can I update x1 to be equal to x2? I'm not talking about putting the contents that x2 points to into x1; I want to know if I can set them equal so that when I test for equality (eq x1 = x2) the result is true. I thought about having them point to a new memory location.

val z = NONE
x1 := z
x2 := z

This doesn't work. I'm guessing I should review references in SML again. Thanks for the help.

#### What are the benefits of the Reader monad?

I have read a blog post about the Reader monad. The post is truly great and explains the topic in detail, but I did not get why I should use the Reader monad in that case. The post says:

Suppose there is a function query: String => Connection => ResultSet

def query(sql: String) = (conn: Connection) => conn.createStatement.executeQuery(sql)

We can run a few queries as follows:

def doSomeQueries(conn: Connection) = {
  val rs1 = query("SELECT COUNT(*) FROM Foo")(conn)
  val rs2 = query("SELECT COUNT(*) FROM Bar")(conn)
  rs1.getInt(1) + rs2.getInt(1)
}

So far so good, but the post suggests using the Reader monad instead:

class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] = new Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] = new Reader(e => f(run(e)).run(e))
}

def query(sql: String): Reader[Connection, ResultSet] =
  new Reader(conn => conn.createStatement.executeQuery(sql))

def doSomeQueries(conn: Connection) = for {
  rs1 <- query("SELECT COUNT(*) FROM Foo")
  rs2 <- query("SELECT COUNT(*) FROM Bar")
} yield rs1.getInt(1) + rs2.getInt(1)

Ok, I got that I don't need to thread the connection through the calls explicitly.
So what? Why is the solution with the Reader monad better than the previous one?

UPDATE: Fixed the typo in def query: = should be =>. This comment only exists because SO insists that edits must be at least 6 chars long. So here we go.

### Lobsters

#### 30 Python Language Features and Tricks You May Not Know About

### StackOverflow

#### Build Maven Scala Project with Akka

Usually I use SBT to build Scala projects. Lately, I have been asked to integrate an existing project; they use Maven as their development tool. I want to work with Maven to build an Akka-Scala project, but I couldn't find any tool to do that.

I tried to make a new Maven project from Eclipse (New Project ==> Maven Project), then add the actor library (Properties => Build Path => Add Akka Library from external Libraries). But it didn't work.

Can you recommend any IDE/tool to do that?

#### Customizing Slick Generator

I'm using the Slick generator to generate my table definitions based on my database, and I would like to change a thing in the generated code. When it generates the classes it does not put my auto-increment keys as Option[Int] = None in the case classes... Is there a way to do that? And maybe add an autoInc method in the table definition that returns the generated id, like this for example:

def autoInc = id.? ~ name <> (User, User.unapply _) returning id

#### creating new Instance of scala object in java

I'm trying to create a system that will load a class extending Mod whether it is written in Scala or Java. I have the class object, and I check if the class is written in Scala by checking whether it has the @ScalaMod annotation; I handle the different languages with a language adapter I wrote. But I keep getting this error:
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.6.0_65]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) ~[?:1.6.0_65]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) ~[?:1.6.0_65]
    at java.lang.reflect.Method.invoke(Method.java:597) ~[?:1.6.0_65]
    at net.minecraft.launchwrapper.Launch.launch(Launch.java:134) [launchwrapper-1.9.jar:?]
    at net.minecraft.launchwrapper.Launch.main(Launch.java:28) [launchwrapper-1.9.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.6.0_65]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) ~[?:1.6.0_65]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) ~[?:1.6.0_65]
    at java.lang.reflect.Method.invoke(Method.java:597) ~[?:1.6.0_65]
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) [idea_rt.jar:?]
Caused by: java.lang.ExceptionInInitializerError
    at java.lang.Class.forName0(Native Method) ~[?:1.6.0_65]
    at java.lang.Class.forName(Class.java:249) ~[?:1.6.0_65]
    at net.acomputerdog.BlazeLoader.mod.language.IModLanguage$ScalaModLanguage.newInstance(IModLanguage.java:10) ~[IModLanguage$ScalaModLanguage.class:?]
    at net.acomputerdog.BlazeLoader.mod.ModList.load(ModList.java:62) ~[ModList.class:?]
    at net.acomputerdog.BlazeLoader.main.BlazeLoader.init(BlazeLoader.java:96) ~[BlazeLoader.class:?]
    at net.minecraft.client.Minecraft.run(Minecraft.java:781) ~[Minecraft.class:?]
    at net.minecraft.client.main.Main.main(Main.java:94) ~[Main.class:?]
    ... 11 more

This is the language adapter class:

public interface IModLanguage {
    public Mod newInstance(Class<?> objectClass) throws Exception;

    public static class ScalaModLanguage implements IModLanguage {
        public Mod newInstance(Class<?> objectClass) throws Exception {
            Class<?> scalaObject = Class.forName(objectClass.getName(), true, ScalaModLanguage.class.getClassLoader());
            return (Mod) scalaObject.getField("MODULE$").get(null);
        }
    }

    public static class JavaModLanguage implements IModLanguage {
        public Mod newInstance(Class<?> objectClass) throws Exception {
            return (Mod) objectClass.newInstance();
        }
    }
}


this is the scala Mod class:

@ScalaMod
object XplosionCoreBL extends Mod {
  val xplosionModHandler: XplosionModHandler = new XplosionModHandler
  var newModVersions: ListBuffer[NewModVersionEntry] = new ListBuffer[NewModVersionEntry]()
  val xplosionConfig: ConfigHandler = new ConfigHandler(new File(ApiGeneral.configDir, "XplosionCore-BL.cfg"), this, "Everything deleted from this file will restore itself with its default value.")
  val logger: XplosionLogger = new XplosionLogger("XplosionCore-BL")

  override def getModId: String = "XplosionCore-BL"
  override def getModName: String = "XplosionCore-BL"
  override def getIntModVersion: Int = 0
  override def getStringModVersion: String = "1.0.0"
  override def isCompatibleWithBLVersion: Boolean = true
  override def getModDescription: String = "Core for all XplosionMods made with BlazeLoader."

  VersionCheckHandler.checkXplosionModsVersions()

  override def start {
  }
}


And here is the handling code:

Class<? extends Mod> cls = iterator.next();
Mod mod = null;
try {
mod = getModLanguage(cls).newInstance(cls);



public static IModLanguage getModLanguage(Class<?> modClass) throws Exception {
if (modClass.isAnnotationPresent(ScalaMod.class)) {
return new IModLanguage.ScalaModLanguage();
}
else {
return new IModLanguage.JavaModLanguage();
}
}


#### Detecting a macro annotated type within the body of a macro annotation

I want to use macro annotations (macro-paradise, Scala 2.11) to generate synthetic traits within an annotated trait's companion object. For example, given some STM abstraction:

trait Var[Tx, A] {
def apply()         (implicit tx: Tx): A
def update(value: A)(implicit tx: Tx): Unit
}


I want to define a macro annotation txn such that:

@txn trait Cell[A] {
val value: A
var next: Option[Cell[A]]
}


will be re-synthesised into:

object Cell {
trait Txn[-Tx, A] {
def value: A
def next: Var[Option[Cell.Txn[Tx, A]]] // !
}
}
trait Cell[A] {
val value: A
var next: Option[Cell[A]]
}


I got as far as producing the companion object, the inner trait, and the value member. But obviously, in order for the next member to have the augmented type (instead of Option[Cell[A]], I need Option[Cell.Txn[Tx, A]]), I need to pattern match the type tree and rewrite it.

For example, say I find the next value definition in the original Cell trait like this:

case v @ ValDef(vMods, vName, tpt, rhs) =>


How can I analyse tpt recursively to rewrite any type X[...] whose X is annotated with @txn to X.Txn[Tx, ...]? Is this even possible, given that, as in the above example, X is yet to be processed? Should I modify Cell to mix in a marker trait to be detected?

So the pattern matching function could start like this:

val tpt1 = tpt match {
  case tq"$ident" => $ident  // obviously don't change this?
  case ???        => ???
}


### TheoryOverflow

#### Knapsack with dependent profits (pairs of items)

I'm working on a problem which MAY be reduced to the following version of Knapsack:

Suppose two items $e_i$ and $e_j$ have profit $p_i$ and $p_j$ respectively. However, if both items are present in the knapsack, then for some $i$ and $j$, the combined profit of $e_i$ and $e_j$ is NOT $p_i+p_j$. It could be lower or higher. Note that in general, profits could be additive, but for some pairs of elements, our new rule holds, and we know in advance the value of $profit(\{e_i\} \cup \{e_j\})$ for such pairs. As always we want to maximize total profit.

So my question is: has work been done on such a variant of knapsack? Are there papers that I can read to better understand this formulation? I am not well-versed in the knapsack literature, and my searches so far have come up empty.
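For what it's worth, this formulation looks very close to the quadratic knapsack problem (profits attached to pairs of items as well as to single items), which has a sizable literature of its own. As a sanity check of the objective described above, here is a brute-force sketch; the function name and the `pair_profit` table are illustrative, not taken from any paper:

```python
from itertools import combinations

def best_subset(weights, profits, capacity, pair_profit):
    """Exhaustive search for knapsack with pairwise-dependent profits.

    pair_profit[(i, j)] (with i < j) is the *joint* profit of packing both
    e_i and e_j, replacing p_i + p_j for that pair; all other profits stay
    additive.  Brute force only -- fine for checking small instances."""
    n = len(weights)
    best_value, best_set = 0, frozenset()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) > capacity:
                continue
            value = sum(profits[i] for i in subset)
            for i, j in combinations(subset, 2):
                if (i, j) in pair_profit:
                    # swap the additive contribution for the joint profit
                    value += pair_profit[(i, j)] - profits[i] - profits[j]
            if value > best_value:
                best_value, best_set = value, frozenset(subset)
    return best_value, best_set

# items 0 and 1 are worth far more together than p_0 + p_1
value, chosen = best_subset([1, 1, 1], [2, 2, 2], 2, {(0, 1): 10})
```

Here the pair bonus makes `{0, 1}` beat every other feasible subset, so the search returns 10 with that pair.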

### StackOverflow

#### Clojure: how to merge several sorted vectors of vectors into common structure?

I have a few vectors of vectors. The first element of each of the sub-vectors is a numeric key. All parent vectors are sorted by these keys. For example:

[[1 a b] [3 c d] [4 f d] .... ]

[[1 aa bb] [2 cc dd] [3 ww qq] [5 f]... ]

[[3 ccc ddd] [4 fff ddd] ...]


I should clarify that some key values in the nested vectors may be missing, but the sorting order is guaranteed.

I need to merge all of these vectors into some unified structure by numeric key. I also need to know when a key was missing from one or more of the original vectors.

Like this:

[ [[1 a b][1 aa bb][]] [[][2 cc dd]] [[3 c d][3 ww qq][3 ccc ddd]] [[4 f d][][4 fff dd]]...]
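The shape of the desired result (one slot per input sequence, empty where a key is missing) can be sketched language-neutrally. A minimal Python version of the idea follows; a Clojure translation would likely use `group-by` or a lazy k-way merge over the sorted inputs, and the function name here is purely illustrative:

```python
def merge_by_key(*seqs):
    """For each key appearing in any input, emit one slot per input sequence:
    the sub-vector with that key, or [] when that input lacks the key.
    Builds one dict per input, so the sortedness of the inputs is not even
    needed; a k-way merge would exploit it instead."""
    tables = [{row[0]: row for row in seq} for seq in seqs]
    keys = sorted({k for t in tables for k in t})
    return [[t.get(k, []) for t in tables] for k in keys]

s1 = [[1, 'a', 'b'], [3, 'c', 'd'], [4, 'f', 'd']]
s2 = [[1, 'aa', 'bb'], [2, 'cc', 'dd'], [3, 'ww', 'qq']]
s3 = [[3, 'ccc', 'ddd'], [4, 'fff', 'ddd']]
merged = merge_by_key(s1, s2, s3)
# merged[0] == [[1, 'a', 'b'], [1, 'aa', 'bb'], []]
```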


### UnixOverflow

#### How do I make use of unused space on my boot drive on FreeBSD

I have an old FreeBSD Server (running 7.3-RELEASE) that desperately needs additional storage. In fact, it has some: the original 20G SCSI drives have been replaced by 300G SCSI drives, so in theory there is 280G available that could be used.

I'd like to make use of this space. I think the best way to do this is by formatting the unused space as a new slice on the existing drive, but I'm not clear how to do this without destroying the data on the existing slice. Most of the documentation I can find about doing this refers to initial installation. I know how to set up slices and partitions during initial installation, but not how to claim unused space on the drive AFTER initial installation.

(I'd also be happy to expand the slice and add additional partitions to the existing slice, but I've heard that this is riskier).

I thought the easy way to do this might be to use /stand/sysinstall, but when I go into either Configure->FDisk or Configure->Label, I get this message:

No disks found!  Please verify that your disk controller is being
properly probed at boot time.  See the Hardware Guide on the
Documentation menu for clues on diagnosing this type of problem.


This is obviously untrue, since I'm actually running off of a disk when I get this message, but maybe sysinstall just doesn't like messing with the boot disk?

Output of fdisk da0:

******* Working on device /dev/da0 *******
parameters extracted from in-core disklabel are:

Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
start 63, size 35905212 (17531 Meg), flag 80 (active)
beg: cyl 0/ head 1/ sector 1;
end: cyl 1023/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>


Output of bsdlabel da0s1

# /dev/da0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
a:  2097152        0    4.2BSD     2048 16384    89
b:  2097152  2097152      swap
c: 35905212        0    unused        0     0         # "raw" part, don't edit
e:  2097152  4194304    4.2BSD     2048 16384    89
f: 29613756  6291456    4.2BSD     2048 16384    89


Update:

I came across the advice to use sade for this purpose. Unfortunately, sade can't see much empty space:

       0        63        62      -     12   unused      0
      63  35905212  35905274  da0s1      8  freebsd    165
35905275     10501  35915775      -     12   unused      0


This may be a dead end. Do I need to figure out drive geometry somehow? It might be relevant to mention that the drive is a RAID 1 mirror set; originally the mirrored drives were both 20G SCSI drives but they've both been swapped out with 300G drives. I'm willing to temporarily break the mirror if that will help.

### StackOverflow

#### Changing scala version of play from console

I need to revert my project to Scala 2.10.0, but it looks like I cannot do it from the console. I've tried to clean and rebuild with scalaVersion := "2.10.0" in build.sbt, but it keeps using 2.10.2. How can I do something like "play scala-version 2.10.0"? (That doesn't seem to be a real command.) I know it is using 2.10.2 because it pulls in scala-compiler and scala-reflect 2.10.2.

### CompsciOverflow

#### CFG to Chomsky Normal Form conversion steps

Give an unambiguous grammar that recognizes the same language

$A \rightarrow (A) \mid (A)(A) \mid B$

$B \rightarrow BB \mid AA \mid CC$

$C \rightarrow ABA \mid \varepsilon$

I am really stumped by this question; can anyone post a solution with the steps?

What I have done so far:

$S_0 \rightarrow A$

$A \rightarrow (A) \mid (A)(A) \mid B$

$B \rightarrow BB \mid AA \mid CC$

$C \rightarrow ABA \mid \varepsilon$

Remove $\varepsilon$ rules

$S_0 \rightarrow A \mid \varepsilon$

$A \rightarrow (A) \mid (A)(A) \mid B \mid () \mid (A)() \mid ()() \mid ()(A)$

$B \rightarrow BB \mid AA \mid CC \mid C \mid B \mid A$

$C \rightarrow ABA \mid AA \mid A \mid B \mid AB \mid BA$

1. Is this correct so far?
2. When I remove the unit rules, do I just need to remove these: $C \rightarrow B, C \rightarrow A, B \rightarrow B, B \rightarrow A$?
3. I wouldn't remove $B \rightarrow BB$ ?
4. For $B \rightarrow B$ can't I just remove the $B$ on the RHS since it goes to itself...?

### QuantOverflow

#### Implied state price density (Question 1 - derivation of the formula)

I came upon the term "implied state price density" in a couple of papers. As far as I understand the concept, one basically tries to extract the "pricing density" from market data.

For the sake of simplicity we assume a constant interest rate $r$ and also don't make any assumptions on the model used to evolve $S_t$.

$C(t,S_t,K,r,T)=e^{-r(T-t)}\int_0^{\infty}(S_T-K)^+f(S_T|S_t)dS_T$

According to Douglas T. Breeden and Robert H. Litzenberger in their paper Prices of State-Contingent Claims Implicit in Option Prices one can recover the density via the formula:

$p(S_T|S_t)=e^{r(T-t)}\frac{\partial^2 C(t,S_t,K,r,T)}{\partial K^2}|_{K=S_T}$

How does one arrive at this formula? I tried to differentiate $C(t,S_t,K,r,T)$, but according to the rules for differentiating parameter integrals this is not how one arrives at the above formula (what am I missing?).
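For reference, a derivation along the lines of Breeden and Litzenberger: the key step is to rewrite the payoff $(S_T-K)^+$ by restricting the integration range to $[K,\infty)$ *before* differentiating; with the max operator left inside the integrand, the usual parameter-integral rules do not apply cleanly.

```latex
C(t,S_t,K,r,T) = e^{-r(T-t)} \int_K^{\infty} (S_T - K)\, f(S_T|S_t)\, dS_T

% Differentiate in K (Leibniz rule). The boundary term vanishes because the
% integrand (S_T - K) is zero at the lower limit S_T = K:
\frac{\partial C}{\partial K} = -e^{-r(T-t)} \int_K^{\infty} f(S_T|S_t)\, dS_T

% Differentiate in K once more; now only the moving lower limit contributes:
\frac{\partial^2 C}{\partial K^2} = e^{-r(T-t)}\, f(K|S_t)
\quad\Longrightarrow\quad
f(S_T|S_t) = e^{r(T-t)} \left.\frac{\partial^2 C}{\partial K^2}\right|_{K=S_T}
```

Here $f$ is the implied density, i.e. the $p(S_T|S_t)$ of the quoted formula (assuming $f$ is a continuous density so that the integrals can be differentiated under the sign).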

P.S. You can read the paper online for free at JSTOR after you register. Or just email me and I will send you the PDF file.

### StackOverflow

#### I want to merge a Seq[Tuple2] and a Seq[String] to a Seq[Tuple3] in Scala

Here is my solution, but I don't like it very much:

var seq1: Seq[String] = Seq("apple", "banana", "camel")
var seq2: Seq[(String, String)] = Seq( "green" -> "fruit", "yellow" -> "fruit", "brown" -> "animal" )
var iter = seq1.toIterator

seq2.map {s => (s._1, s._2, iter.next()) }
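One way to avoid the mutable iterator is to pair the two sequences positionally and flatten; in Scala that is roughly `seq2.zip(seq1).map { case ((a, b), c) => (a, b, c) }`. The same shape in Python, as an illustration of the idea:

```python
seq1 = ["apple", "banana", "camel"]
seq2 = [("green", "fruit"), ("yellow", "fruit"), ("brown", "animal")]

# positional pairing, then flattening each ((a, b), c) into (a, b, c)
seq3 = [(a, b, c) for (a, b), c in zip(seq2, seq1)]
```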


### TheoryOverflow

#### What kind of computations or algorithms give rise to iterated logarithm and inverse Ackermann function?

I heard a statement that the iterated logarithm and the inverse Ackermann function are usually the slowest-growing functions used in the complexity analysis of computer programs. Is that true? What kinds of computations or algorithms naturally give rise to the iterated logarithm and the inverse Ackermann function, in the sense that divide-and-conquer gives rise to logarithms?
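Two classical sources are union-find with path compression, whose amortized cost is inverse-Ackermann by Tarjan's analysis, and algorithms that repeatedly shrink the problem to the logarithm of its size, such as Cole-Vishkin deterministic 3-coloring of a ring in the distributed setting (iterated logarithm). A small sketch of $\log^* n$ itself, using integer arithmetic to stay exact:

```python
def log_star(n: int) -> int:
    """Iterated logarithm: how many times floor(log2) must be applied
    before the value drops to 1 or below.  Uses bit_length for an exact
    integer floor-log2, avoiding floating-point rounding."""
    count = 0
    while n > 1:
        n = n.bit_length() - 1   # floor(log2(n)) for n >= 1
        count += 1
    return count
```

Even for astronomically large inputs the answer stays tiny: `log_star(2 ** 65536)` is only 5, which is why $O(\log^* n)$ terms are effectively constant in practice.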

#### Does every greedy algorithm have matroid structure?

It's well established that for every matroid $M$ and any weight function $w$, there exists a GREEDYBASIS(M,w) which returns a maximum-weight basis of $M$. Is the converse also true? That is, if some greedy algorithm is optimal for a problem, must the problem have matroid structure?
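A standard counterexample to the converse is interval scheduling: the earliest-finish-time greedy is optimal, yet the family of conflict-free interval sets is not a matroid, because the exchange axiom fails. A quick check on a 3-interval instance (interval endpoints chosen for illustration):

```python
def compatible(intervals):
    """True if no two intervals in the set overlap (touching endpoints allowed)."""
    ivs = sorted(intervals)
    return all(a_end <= b_start for (_, a_end), (b_start, _) in zip(ivs, ivs[1:]))

A = {(1, 3)}              # a conflict-free ("independent") set of size 1
B = {(0, 2), (2, 4)}      # a conflict-free set of size 2

# Matroid exchange axiom: since |B| > |A|, some x in B \ A should keep
# A independent when added.  Here neither element of B works, because
# both (0, 2) and (2, 4) overlap (1, 3):
exchange_ok = any(compatible(A | {x}) for x in B - A)
```

So greedy optimality alone does not force matroid structure; the structures for which greedy-type algorithms work are strictly broader (e.g. greedoids).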

#### Figuring out EasyVer problems - problems whose witness can be verified in time independent of the instance size

In a related question I've defined a class of graph problems which are verifiable in time depending only on the size of the witness:

$EasyVer=\{L\subset \mathcal{G}\times \mathbb{N}|$ a witness $w$ of $L$'s instance can be verified in $poly(k)$ time$\}$, i.e. independent of $|V|,|E|$.

You can assume that the verifier has random access to the graph adjacency matrix in $O(1)$ time.

Examples of $EasyVer$ problems are $Clique$, $IS$ and $SteinerTree$.

Also, many packing problems, such as $triangle-packing$, $k-path-packing$, or in general $H-packing$ for a graph $H$ of constant treewidth, are all in $EasyVer$.

Problems which aren't in $EasyVer$ (i.e. which require more extensive interaction with the input) are also easily found: $VC$, $DominatingSet$, $FVS$ (Feedback Vertex Set).

In order to get a sense of how $EasyVer$ relates to known complexity classes, it would be useful to have a list of problems it contains.

Which other NP-complete problem (or even better, problem classes) can be verified in $poly(k)$ time, where $k$ is the size of the problem witness?
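To make the definition concrete, here is what a $poly(k)$ verifier looks like for $Clique$: it touches the adjacency matrix only $\binom{k}{2}$ times, independent of $|V|$, using the $O(1)$ random-access assumption from the question (a sketch; the graph below is illustrative):

```python
def verify_clique(adj, witness):
    """Check that `witness` (a list of k vertex indices) is a clique,
    using O(k^2) random-access probes into the adjacency matrix `adj`.
    Running time is independent of the number of vertices."""
    k = len(witness)
    if len(set(witness)) != k:           # vertices must be distinct
        return False
    return all(adj[witness[i]][witness[j]]
               for i in range(k) for j in range(i + 1, k))

# a triangle on vertices 0,1,2 plus an isolated vertex 3
triangle_plus_isolated = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
```

By contrast, verifying a vertex cover requires confirming that *every* edge is covered, which forces the verifier to read the whole graph; that is exactly why $VC$ lands on the other side of the split.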

### QuantOverflow

#### Closed form european option prices for a variance gamma process with a randomly distributed drift, volatility, and variance rate

Does an option pricing model with a closed form European option price exist that takes into account randomly distributed drift, volatility, and variance rate?

I prefer a modification to the variance gamma model, but a modification to any other model is welcome.

### CompsciOverflow

#### Kosaraju’s Algorithm - why transpose?

In a directed graph, to find strongly connected components, why do we have to transpose the adjacency matrix (reverse the direction of all edges)? Couldn't we instead take the list of nodes ordered by finishing time and traverse the original graph - in other words, find the finish times of all vertices and then start traversing from the lowest finish time to the greatest (in increasing finish-time order)?

Additionally, if we do a topological sort on some DAG, then reverse the edges (transpose the adjacency matrix) and do a topological sort again, should we get two equal arrays, just in reversed order?

EDIT: Algorithm description from other topic: Correctness of Strongly Connected Components algorithm for a directed graph
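The subtlety is that the vertex with the *smallest* finish time is not necessarily in a sink SCC, so a second pass in increasing finish order over the original graph can merge components. A three-vertex sketch (plain recursive DFS; vertex names are illustrative):

```python
def finish_order(graph, roots):
    """Vertices in increasing order of DFS finish time."""
    seen, order = set(), []
    def dfs(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs(v)
        order.append(u)            # u finishes here
    for u in roots:
        if u not in seen:
            dfs(u)
    return order

def dfs_trees(graph, roots):
    """Vertex sets of the DFS trees when roots are tried in the given order."""
    seen, trees = set(), []
    def dfs(u, tree):
        seen.add(u)
        tree.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs(v, tree)
    for u in roots:
        if u not in seen:
            tree = set()
            dfs(u, tree)
            trees.append(tree)
    return trees

# SCCs are {A, B} and {C}; the edge A -> C leaves the first SCC.
g  = {'A': ['B', 'C'], 'B': ['A'], 'C': []}
gt = {'A': ['B'], 'B': ['A'], 'C': ['A']}       # transpose

order = finish_order(g, ['A'])      # ['B', 'C', 'A']: B finishes first
# proposed variant: increasing finish time on the original graph
wrong = dfs_trees(g, order)         # [{'A', 'B', 'C'}] -- SCCs wrongly merged
# Kosaraju: decreasing finish time on the transposed graph
right = dfs_trees(gt, order[::-1])  # [{'A', 'B'}, {'C'}] -- correct
```

Here `B` gets the minimum finish time even though its SCC `{A, B}` is not a sink, and a DFS from `B` in the original graph then swallows everything it can reach. Reversing the edges blocks exactly those escape routes.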

### StackOverflow

#### When a keyword means different things in different contexts, is that an example of context sensitivity?

According to this answer, => in Scala is a keyword which has two different meanings: (1) to denote a function type: Double => Double, and (2) to create a lambda expression: (x: Double) => 2*x.

How does this relate to formal grammars, i.e. does this make Scala context sensitive?

I know that most languages are not context free, but I'm not sure whether the situation I'm describing has anything to do with that.

Edit:

Seems like I don't understand context sensitive grammars well enough. I know how the production rules are supposed to look, and what they mean ("this production applies only if A is surrounded by these symbols"), but I'm just not sure how they relate to actual (programming) languages.

I think my confusion stems from reading something like "Chomsky introduced this term because a word's meaning can depend on its context", and I connected => with the term "word" in the quote, and those two uses of it being two separate contexts.

### QuantOverflow

#### Measuring historical earnings surprises, their frequency and severity

This is my first post to Quantitative Finance, so I hope my question is formatted the right way.

I am starting to research the effects of earnings surprises on certain equity indices. Is there a source, such as an academic paper or database, for:

• Companies that have had a high incidence of positive and/or negative earnings surprises over the last 5 to 10 years?
• Statistics on the frequency and severity of earnings surprises on the day of and a few days after the event?
• Statistics about the medium-term performance (the following quarter, for example) of stocks with large negative or positive surprises?
• Industries with more earnings surprises than others?

### StackOverflow

#### Scalate 1.7.0 - TemplateEngine

I have this error when trying to use this https://gist.github.com/nise-nabe/5024801 with play, to load haml templates with Scalate.

bad symbolic reference. A signature in TemplateEngine.class refers to term util in package org.fusesource.scalate which is not available. It may be completely missing from the current classpath, or the version on the classpath might be incompatible with the version used when compiling TemplateEngine.class.

What can it be? I don't know where to look from this error, I don't understand it very well.

### StackOverflow

#### Declare variable in a Play2 scala template

How do you declare and initialize a variable to be used locally in a Play2 Scala template?

I have this:

@var title : String = "Home"


declared at the top of the template, but it gives me an error saying:

illegal start of simple expression """),_display_(Seq[Any](/*3.2*/var)),format.raw/*3.5*/(""" title : String = "Home"


### QuantOverflow

#### Fitting Egarch Model

I am performing a Monte Carlo simulation in MATLAB for the first-order EGARCH model, in which I am simulating 100 paths of size 500, assuming Gaussian and Student's-t distributions for the innovations. I am having problems writing the MATLAB code for fitting an EGARCH(1,1) model to the simulated data. I'd be very grateful if someone helped me with the required algorithm (for getting the mean of the parameters over the number of paths; replications = 100 in this case). Thanks in advance - Tim

### StackOverflow

#### Clojure closure efficiency?

Quite often, I swap! an atom value using an anonymous function that uses one or more external values in calculating the new value. There are two ways to do this, one with what I understand is a closure and one not, and my question is which is the better / more efficient way to do it?

Here's a simple made-up example -- adding a variable numeric value to an atom -- showing both approaches:

(def my-atom (atom 0))

(swap! my-atom
       (fn [curr-val]
         ;; we pull 'n' from outside the scope of the function,
         ;; asking the compiler to do some magic to make this work
         (+ curr-val n)))

(swap! my-atom
       ;; we bring 'n' into the scope of the function as its second parameter,
       ;; so no closure is needed
       (fn [curr-val n]
         (+ curr-val n))
       n)


This is a made-up example, and of course, you wouldn't actually write this code to solve this specific problem, because:

(swap! my-atom + n)


does the same thing without any need for an additional function.

But in more complicated cases you do need a function, and then the question arises. For me, the two ways of solving the problem are of about equal complexity from a coding perspective. If that's the case, which should I prefer? My working assumption is that the non-closure method is the better one (because it's simpler for the compiler to implement).

There's a third way to solve the problem, which is not to use an anonymous function. If you use a separate named function, then you can't use a closure and the question doesn't arise. But inlining an anonymous function often makes for more readable code, and I'd like to leave that pattern in my toolkit.

Thanks!

edit in response to A. Webb's answer below (this was too long to put into a comment):

My use of the word "efficiency" in the question was misleading. Better words might have been "elegance" or "simplicity."

One of the things that I like about Clojure is that while you can write code to execute any particular algorithm faster in other languages, if you write idiomatic Clojure code it's going to be decently fast, and it's going to be simple, elegant, and maintainable. As the problems you're trying to solve get more complex, the simplicity, elegance and maintainability get more and more important. IMO, Clojure is the most "efficient" tool in this sense for solving a whole range of complex problems.

My question was really -- given that there are two ways that I can solve this problem, what's the more idiomatic and Clojure-esque way of doing it? For me when I ask that question, how 'fast' the two approaches are is one consideration. It's not the most important one, but I still think it's a legitimate consideration if this is a common pattern and the different approaches are a wash from other perspectives. I take A. Webb's answer below to be, "Whoa! Pull back from the weeds! The compiler will handle either approach just fine, and the relative efficiency of each approach is anyway unknowable without getting deeper into the weeds of target platforms and the like. So take your hint from the name of the language and when it makes sense to do so, use closures."

### /r/emacs

#### Is there a function to reverse the effect of fill-paragraph?

I like working on my text on Emacs but I'd like to unfill-paragraph before I post to certain services.

submitted by sstewartgallus

### What we do

Weft tracks shipping containers using low-cost hardware to make sure that shipments get to where they're supposed to be on time and intact, saving billions in lost value due to cargo shrink and disrupted supply chains.

We take the info we get from the hardware and figure out where the bottlenecks in the supply chain are, predict whether or not a shipment is going to make it to its destination on time, and dynamically reroute/reschedule shipments so that we can optimize the system as a whole (plus we're also doing neat stuff with freight brokerage and automated freight forwarding).

### How we do it

Web stack -> clojure (immutant) -- we use middleman + enlive (and a bit of hiccup) for templating. In the process of a re-architecture, so we're changing how the system is set up now.

Algorithms -> a dizzying mixture of oldschool and newschool techniques ;-)

Hardware -> think cell phone on crack (atmel avr xmega, a bunch of sensors, gps, gsm, etc). Working integrated chip now! Have some pilots running with v1 hw.

### And the rest

We've got some very interesting partners and customers (ranging from telcos to enterprise software providers to regional and international logistics companies). We also have some top tier investors!

Looking for help at every point in the system (hardware, firmware, frontend, backend, algorithms, mobile, etc) but we're mostly focused on the data science aspect of Weft for the near term.

Get information on how to apply for this position.

### Wes Felter

#### The Colourist - Little Games


#### "You know North Americans don’t like zombies - just watch their constant anti-zombie..."

“You know North Americans don’t like zombies - just watch their constant anti-zombie propaganda, it’s disgusting. And all for a bunch of folks whose only fault is a high appreciation of one’s brain.”

- smsm42

### TheoryOverflow

#### Extending the definition of network surprise to weighted graphs

Recent research in graph clustering (also called community detection in other contexts) has shown that a definition beyond the traditional modularity (introduced by Newman, 2004) can be useful to evaluate the quality of a network partition.

This measure has been called surprise, since it evaluates how surprising (unlikely) a given partition is, from a statistical point of view (Aldecoa, 2013).

Since the only precise definition has been introduced by the very recent article "Graph Clustering with Surprise: Complexity and Exact Solutions", T.Fleck, A.Kappes and D.Wagner (2014), I will use it as it is:

Let $\xi$ be a clustering of a graph $G = (V,E)$ with $i_e$ intracluster edges. Among all graphs labeled with vertex set $V$ and exactly $m$ edges, we draw a graph $G$ uniformly at random. The surprise $S(\xi)$ of this clustering is then the probability that $G$ has at least $i_e$ intracluster edges with respect to $\xi$. The lower this probability, the more surprising it is to observe that many intracluster edges within $G$, and hence, the better the clustering. The above process corresponds to an urn model with $i_p(\xi)$ white and $p - i_p(\xi)$ black balls from which we draw $m$ balls without replacement. The probability to draw at least $i_e$ white balls then follows a hypergeometric distribution, which leads to the following definition: $$S(\xi) := \sum \limits_{i=i_e}^m \dfrac{\binom{i_p}{i} \binom{p-i_p}{m-i}} {\binom{p}{m}}$$ the lower $S(\xi)$, the better the clustering. Some authors take the $-\log_{10}(S)$ to work with tractable numbers and avoid underflow errors on computers, so for them, the higher $-\log_{10}(S)$ the better the clustering.

This definition of surprise is very interesting to me because it overcomes many problems, but there still isn't any method to maximize it explicitly. Some meta-heuristics have been developed, though on trees an $O(n^5)$ algorithm to minimize it has been found. Global optimization methods or integer linear programming are also viable options.

Anyway, despite my ramblings on the optimization of surprise, the main problem is how to extend this definition to weighted graphs. My main difficulty in extending it is whether I have to consider the continuous analogue of the hypergeometric distribution, since the summation from $i=i_e$ to $m$ must be changed to a more generic $i_{ew}\in \mathbb{R}$ that represents the sum of the weights of all intracluster edges, and the same for $m_w$, which is no longer the number of edges of the graph but the sum of all edge weights of the graph.

I'm having some trouble inventing a viable definition of graph surprise that does not force me to switch to the Gauss-Hermite polynomial chaos (the continuous analogue of the hypergeometric distribution).

So I think this question should be also addressed in the list of "Problems that are easy on unweighted graphs, but hard for weighted graphs" :P

### StackOverflow

#### convert List[Tuple2[A,B]] to Tuple2[Seq[A],Seq[B]]

Stuck here, trying to convert a List of case class tuples to a tuple of sequences and multi-assign the result.

val items = repo.foo.list // gives me a List[(A,B)]


I can pull off multi-assignment like so:

val(a,b) = (items.map(_._1).toSeq, items.map(_._2).toSeq)


but it would be nicer to do in 1 step, along the lines of:

val(a,b) = repo.foo.list.map{case(a,b) => (a,b)}


### StackOverflow

#### Scalate TemplateException with Play 2.2.1

I'm trying to use Scalate 1.6.1 with Play, but it gives me this:

[TemplateException: scala.reflect.internal.TreeInfo.firstArgument(Lscala/reflect/internal/Trees$Tree;)Lscala/reflect/internal/Trees$Tree;]


in this code in the ScalaIntegration.scala

def render(args: (Symbol, Any)*) = {
ScalateContent{
scalateEngine.layout(name, args.map {
case (k, v) => k.name -> v
} toMap)
}
}


I know it can be a version problem, so I'm currently trying to run it with Scala 2.10.0, but I would then like to change to 2.10.2.

### TheoryOverflow

#### PCP characterization of NP

The PCP theorem ($\mathsf{NP} = \mathsf{PCP}(\log n, O(1))$) is a major result in complexity theory, with many applications such as proving hardness-of-approximation results. However, it seems to me that it does not offer any insight that leads to separating P from NP, or NP from coNP. My intuition is that P=NP would imply that coNP = PCP(log n, O(1)). That would mean a Tautology instance has a proof that can be verified by an efficient probabilistic verifier using logarithmically many random bits and reading only a constant number of bits of a proposed proof. It seems that the PCP theorem cannot shed light on why Tautology cannot have such a proof system.

Why is the PCP characterization of NP not helpful in separating NP from coNP ( or from P)? Is there any known barrier?

Edit: Provided context and motivation.

### StackOverflow

#### Modelling / documenting functional programs

I've found UML useful for documenting various aspects of OO systems, particularly class diagrams for overall architecture and sequence diagrams to illustrate particular routines. I'd like to do the same kind of thing for my clojure applications. I'm not currently interested in Model Driven Development, simply on communicating how applications work.

Is UML a common / reasonable approach to modelling functional programming? Is there a better alternative to UML for FP?

### CompsciOverflow

#### Decision Tree with Unbalanced Data

I have a data set with two classes: one class has at most 2000 members while the size of the second class is unlimited, though it is typically in the hundreds of thousands. I have read that it is problematic to naively classify this data with a decision tree. My question is: how might I modify the data or the classification scheme so that a decision tree can still be used?
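One common fix, among others such as per-class weights or cost-sensitive splitting criteria, is to undersample the majority class before training the tree. A minimal sketch; the function name and the `ratio` parameter are illustrative, not from any particular library:

```python
import random

def undersample(X, y, majority_label, ratio=1.0, seed=0):
    """Randomly undersample the majority class down to `ratio` times the
    minority-class size, so a decision tree no longer sees a split that
    the trivial majority predictor can dominate."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t != majority_label]
    majority = [(x, t) for x, t in zip(X, y) if t == majority_label]
    k = min(len(majority), int(ratio * len(minority)))
    data = minority + rng.sample(majority, k)
    rng.shuffle(data)
    return [x for x, _ in data], [t for _, t in data]
```

The cost is throwing information away; when the majority class is cheap to collect (as here), training several trees on different undersamples and voting is a common refinement.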

#### Improving SVM Performance [on hold]

I'm using the PyML implementation of an SVM to classify the MNIST database, but I am unable to reproduce the SVM results touted on Yann LeCun's website. Why is this? How can I improve the performance of an SVM in classifying MNIST?

#### PDA structure for the language $w_1 \ne w^R \mid w, w_1 \in \{a,b\}^*$

Consider the language: $L = \{ww_1 : w, w_1 \in \{a,b\}^*, w^R \ne w_1\}$

The task is to find a PDA to represent the language. The book I'm using is Formal Languages and Automata by Linz, and the closest problems to this one that I've worked are the languages:

$L = \{wcw_1 : w, w_1 \in \{a,b\}^*, w \ne w_1^R \}$ and $L = \{wcw_1 : w, w_1 \in \{a,b\}^*, w = w_1^R \}$

For those languages containing the $c$, the trick was to push everything to the stack and as soon as you see a $c$, transition to another state that begins popping everything off of the stack and transitioning as appropriate.

Here I don't have a pivot to rely on, and I'm really not completely sure what strings actually are in the language. Consider $a$ and $b$...

Edit:

So I've created this PDA and, somehow, it seems to work. Can someone check my work?


### TheoryOverflow

#### Is there a simpler proof of Beigel and Tarui's transformation of ACC0 circuits?

Beigel and Tarui's transformation of $\mathsf{ACC}^0$ circuits into depth-2 circuits with a symmetric function on top of AND gates of polylogarithmic fan-in is one of the important results in circuit complexity. For example, the recent breakthrough separation of $\mathsf{NEXP}$ from $\mathsf{ACC}^0$ by Ryan Williams uses this transformation to design a fast $\mathsf{ACC}^0$-CircuitSAT algorithm.

I think the proof in their paper is a little complicated and too technical for me. Is there a simpler proof of their result?

### Fefe

#### Oh, now that's a shame. The new prime minister ...

Oh, now that's a shame. The new prime minister of Ukraine runs a foundation called "OpenUkraine - Arseniy Yatsenyuk Foundation". Its website is openukraine.org. The partner list there is so magnificent that at first I took it for slander or satire. It was so magnificent that if someone had paid me to put together the most reputation-damaging sponsor list imaginable for the foundation of Ukraine's new prime minister, I couldn't have done it this well. Unfortunately, the website suddenly vanished yesterday.

Fortunately, there is a screenshot of it. The links were lost, of course, since it's a screenshot. But even without links there are familiar faces to be seen: the Black Sea Trust for Regional Cooperation ("a project of the German Marshall Fund"), Chatham House, the International Renaissance Foundation (a project of George Soros), NATO (no, really!), the US State Department, NED (Wikipedia unfortunately doesn't mention that it is commonly considered a CIA front organization; with Google autocomplete switched on, typing "national endowment for democracy" offers me "cia front" as the top completion), Horizon Capital (a private-equity locust), and Swedbank (amusingly, that link is half-dead too). And those are just the ones in Latin script. The Cyrillic one at the top left is the Pinchuk Fund, named after this turbo-capitalist / oligarch. The one with the eagle crest is the Polish embassy in Kiev, so it, too, leans anti-Russian. The Cyrillic one at the bottom left had no link; it appears to be a local furniture manufacturer.

In my eyes, the overall picture doesn't leave many questions open.

The pointer to this foundation came from this article by Volker Bräutigam; many thanks for that.

Update: archive.org also has a fairly recent backup of the homepage, but it is missing exactly the partner list at issue here.

### CompsciOverflow

#### Proof that probability that hashing with open addressing needs more than $k$ attempts is $2^{-k}$ at most

There are $n$ elements in a hash table of size $m \geq 2n$ which uses open addressing to avoid collisions.

The hash function was chosen randomly from a set of uniform functions. A set $H$ of hash functions $h:U\to\{0,\dots,m-1\}$ is called uniform if, for every pair of distinct keys $x,y \in U$, the number of hash functions $h \in H$ with $h(x) = h(y)$ is at most $\frac{|H|}{m}$.

Show that the probability that, for $i = 1, 2, \dots, n$, the $i$-th insert operation needs more than $k$ attempts is at most $2^{-k}$.

This is an assignment I got as homework. What I have already worked out:

The probability $p_1$ of a collision is of course 0 for an empty table.

The probability $p_i$ of a collision after $k$ attempts should be $\frac{i - 1}{2n}\cdot k$, assuming that the table is filled with $i-1$ elements at this point and the table size is $2n$ as the worst case.

So I have $$p_i= \frac{i-1}{2n} \cdot k \leq 2^{-k},$$

but I don't know where to go from here.

The open-addressing method used here simply iterates over different hash functions until a free slot is found (for example $h(x) = (x \bmod j) \bmod n$ with increasing prime numbers for $j$).
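For reference, the intended bound follows a standard pattern (this hint is mine, not part of the original post): because $m \ge 2n$ and at most $n$ slots are ever occupied, each individual attempt collides with probability at most $1/2$, and the uniformity of the hash family lets the $k$ attempts be bounded independently:

```latex
\Pr[\text{one attempt collides}] \;\le\; \frac{n}{m} \;\le\; \frac{n}{2n} \;=\; \frac{1}{2},
\qquad
\Pr[\text{more than } k \text{ attempts needed}] \;\le\; \left(\frac{1}{2}\right)^{k} \;=\; 2^{-k}.
```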

#### A new way to create dynamic pages using PHP and MySQL compared to the conventional approach

I have used a new way of creating a PHP file; hopefully it's a new way of creating PHP pages. For details please see the code (click on "see the system approach") and let me know: http://worldsenex.in/

My question is: is the approach I tried OK, or will it create problems?

### Less Wrong

#### Strategic choice of identity

Identity is mostly discussed on LW in a cautionary manner: keep your identity small, be aware of the identities you are attached to. As benlandautaylor points out, identities are very powerful, and while being rightfully cautious about them, we can also cultivate them deliberately to help us achieve our goals.

Some helpful identities that I have that seem generally applicable:

• growth mindset
• low-hanging fruit picker
• truth-seeker
• jack-of-all-trades (someone who is good at a variety of skills)
• someone who tries new things
• universal curiosity
• mirror (someone who learns other people's skills)

Out of the above, the most useful is probably growth mindset, since it's effectively a meta-identity that allows the other parts of my identity to be fluid. The low-hanging fruit identity helps me be on the lookout for easy optimizations. The universal curiosity identity motivates me to try to understand various systems and fields of knowledge, besides the domains I'm already familiar with. It helps to give these playful or creative names, for example, "champion of low-hanging fruit". Some of these work well together, for example the "trying new things" identity contributes to the "jack of all trades" identity.

It's also important to identify unhelpful identities that get in your way. Negative identities can be vague like "lazy person" or specific like "someone who can't finish a project". With identities, just like with habits, the easiest way to reduce or eliminate a bad one seems to be to install a new one that is incompatible with it. For example, if you have a "shy person" identity, then going to parties or starting conversations with strangers can generate counterexamples for that identity, and help to displace it with a new one of "sociable person". Costly signaling can be used to achieve this - for example, joining a public speaking club. The old identity will not necessarily go away entirely, but the competing identity will create cognitive dissonance, which it can be useful to deliberately focus on. More specific identities require more specific counterexamples. Since the original negative identity makes it difficult to perform the actions that generate counterexamples, there needs to be some form of success spiral that starts with small steps.

Some examples of unhelpful identities I've had in the past were "person who doesn't waste things" and "person with poor intuition". The aversion to wasting money and material things predictably led to wasting time and attention instead. I found it useful to try "thinking like a trader" to counteract this "stingy person" identity, and get comfortable with the idea of trading money for time. Now I no longer obsess about recycling or buy the cheapest version of everything. Underconfidence in my intuition was likely responsible for my tendency to miss the forest for the trees when studying math or statistics, where I focused on details and missed the big picture ideas that are essential to actual understanding. My main objection to intuitions was that they feel imprecise, and I am trying to develop an identity of an "intuition wizard" who can manipulate concepts from a distance without zooming in. That is a cooler name than "someone who thinks about things without really understanding them", and brings to mind some people I know who have amazing intuition for math, which should help the identity stick.

There can also be ambiguously useful identities, for example I have a "tough person" identity, which motivates me to challenge myself and expand my comfort zone, but also increases self-criticism and self-neglect. Given the mixed effects, I'm not yet sure what to do about this one - maybe I can come up with an identity that only has the positive effects.

Which identities hold you back, and which ones propel you forward? If you managed to diminish negative identities, how did you do it and how far did you get?

#### New LW Meetup: Auckland

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.

Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!

If you missed the deadline and wish to have your meetup featured, you can reach me on gmail at frank dot c dot adamek.

If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing lists in order to be notified when an irregular meetup is happening: Atlanta, Chicago, Cincinnati, Cleveland, Frankfurt, Helsinki, Marin CA, Ottawa, Pittsburgh, Portland, Southern California (Los Angeles/Orange County area), St. Louis, Vancouver.

Whether or not there's currently a meetup in your area, you can sign up to be notified automatically of any future meetups. And if you're not interested in notifications you can still enter your approximate location, which will let meetup-starting heroes know that there's an interested LW population in their city!

If your meetup has a mailing list that you'd like mentioned here, or has become regular and isn't listed as such, let me know!

Want to help out the common good? If one of the meetups listed as regular has become inactive, let me know so we can present more accurate information to newcomers.

### StackOverflow

#### Scala SQL DSL (Internal/External)

I have been looking into Scala, primarily at how to build a DSL similar to C# LINQ/SQL. Having worked with the C# LINQ query provider, it was easy to introduce our own custom query provider which translated a LINQ query into our own proprietary data store scripts. I am looking for something similar in Scala, e.g.:

 val query = select Min(Close), Max(Close)
from   StockPrices
where  open > 0


First of all, is this even possible to achieve in Scala using an internal DSL?

Any thoughts/ideas in this regard are highly appreciated.

I am still new in the Scala space, but have started looking into Scala metaprogramming and Slick. My complaint with Slick is that I want to align my DSL closely with the SQL query - similar to the above syntax.
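To illustrate that the general shape is achievable, here is a minimal internal-DSL sketch (all names are invented; a production library such as Slick works very differently, with type-safe column references rather than strings):

```scala
// A tiny string-based query builder: infix method calls give SQL-like reading order.
case class Query(cols: Seq[String], table: String = "", cond: String = "") {
  def from(t: String): Query = copy(table = t)
  def where(c: String): Query = copy(cond = c)
  def sql: String = {
    val base = s"SELECT ${cols.mkString(", ")} FROM $table"
    if (cond.isEmpty) base else s"$base WHERE $cond"
  }
}
object select { def apply(cols: String*): Query = Query(cols) }

// Reads almost like the desired syntax:
val query = select("MIN(Close)", "MAX(Close)") from "StockPrices" where "Open > 0"
```

Scala's infix method notation (`q from "t"` is `q.from("t")`) is what makes the SQL-like surface possible; getting compile-time checking of column names, as LINQ query providers do, requires macros or a library like Slick.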

#### In-line item editing in Lift / handling 2 different form submit needs on one page

OK, so first off, let's start with me acknowledging that the bind( ... ) way of binding Lift forms is so last week! :) I do know that, and I just haven't gone back to update this code yet. Also, I trust now that there's some really slick Lifty way to do this. That's what I seek. I'm stumped as to even how to hack something together. That said...

I have a list of Items that I initially display non-editable, and the title of each Item is an ajax-enabled link that calls to the server and replaces that line-item with an editable form of the Item (I use SetHtml to swap the form in at the <li> that listed that Item). The "parent" Items list view looks something like this:

<form data-lift="form.ajax">
  <div data-lift="creategamewizard?multipart=true" id="wizardform">
    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
    </ul>
    some more form elements
    <button>Submit</button>
    <input type="hidden" id='298356928734' />
  </div>
</form>

This ajax submit (via the hidden field) calls processSubmit().

The SetHtml that swaps in the editableItem form looks something like this.
NOTE: At the end of the following listing, the "save" binding has no server-side code tied to it because the "parent" submit button is already on the page, and when I put another hidden field in this binding or tried to tie any code directly to the Edit Item Save button, that code and the "parent" submit got triggered. So the approach below was to try to use the "parent" submit for both the parent submit as well as the Edit Item submit.

<a href="javascript://" onclick={ajaxOnClickHandler(editItemClickHandler(item.id.get))}>{item.title.get}</a>

def ajaxOnClickHandler(jsHandler: ()=>JsCmd) =
{
SHtml.onEvent( e => jsHandler()).toJsCmd+";return false;"
}
def editItemClickHandler(itemId: String): ()=>JsCmd = ()=>
{
trace.logAjaxComm("ExistingItem.Edit()")
JsCmds.SetHtml("LiId"+itemId, getEditableItem(promo) )
}
def getEditableItem(itemId: String) =
{
bind( ...
"promotitle" -> SHtml.text(editablePromo.get.promotitle.is,
(s:String) => {
trace.logUIDataManipulation("Saving new promo Title["+s+"]");
editablePromo.get.promotitle(s)
}, "id" -> "promotitle"),
"save" -> SHtml.button("Save", ()=> {})
)
}


Then when the user selects an Item, and the editable Item form is plugged in, there's "another" submit button that should ajax submit the form data for that item, and then swap back in the (now updated) nonEditable version of the data.

The problem for me is the submission. In addition to the Edit Item form above, I've got a ajaxified submit button on the "parent" non-editable list page to handle submitting some fields below the list. The Edit Item "save"-> binding adds a button, which should do (and in fact does) nothing for itself, but it does trigger the "Parent" submit button. And I route that submit to do the save of the Edit Item form.

The non-editable Item and the editable Item code swap fine, but changes made in the editable Item form are not saved. I figured out that this is because the elements in the editable Item form are not being submitted at all; the following is an example of a log message I don't see at all...

bind( ... "promotitle" -> SHtml.text(editablePromo.get.promotitle.is,
(s:String) => {
trace.logUIDataManipulation("Saving new promo Title["+s+"]");
editablePromo.get.promotitle(s)
}, "id" -> "promotitle")
)


In a normal ajaxified form, all element handlers are called (if there are changes to the field, I guess...) in order of rendering, with the submit/hidden elements' handlers being called last (if they're last in the bind list).

So finally, let's get around to my question: if you're doing in-place editing like this, how do I manage two submit buttons (the one for the non-editable list page plus the additional one that gets added when editing an item)? I'm sure I don't need to refresh the page, but I can't figure out how you'd do this with Ajax. Alternatively, maybe the in-place editable form can be submitted as a non-submit ajax action, i.e. somehow that doesn't trigger the parent submit?

### CompsciOverflow

#### Is this a good approach for first attempt to create my own language and compiler?

I want to try to create my own very simple programming language and its compiler or interpreter.

The programming language I use is Java.

My idea was to create a compiler which will compile source code of my created language to some kind of intermediate code (nothing binary or complex, just plain text), and then have another program (a kind of virtual machine or interpreter) which will execute this intermediate code in Java.

For example, the 'programmer' types: write "hi world". The compiler turns it into some intermediate code like w-hi world. The VM/interpreter program reads this intermediate code and executes System.out.println("hi world");

My question is:

Is this approach common with programmers who want to try to create a simple language for the first time? Is this a good way to start?

I searched this site and came across questions on this subject, but the answers were more complex and technical than what I'm looking for.
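The described two-stage pipeline can be sketched in a few lines of Scala (the names and the `w-` opcode are just the question's own invention):

```scala
// Stage 1: "compile" a statement of the toy language into flat intermediate text.
def compile(src: String): String = {
  val s = src.trim
  if (s.startsWith("write "))
    "w-" + s.stripPrefix("write ").stripPrefix("\"").stripSuffix("\"")  // write "x" -> w-x
  else sys.error(s"unknown statement: $s")
}

// Stage 2: the "VM"/interpreter reads the intermediate code and executes it.
def run(code: String): String =
  if (code.startsWith("w-")) code.stripPrefix("w-")  // a real VM would print this
  else sys.error(s"unknown opcode: $code")

val out = run(compile("write \"hi world\""))  // "hi world"
```

Splitting compilation from execution like this is a perfectly common first step; most tutorials later replace the flat text format with a token list or a syntax tree.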

### CompsciOverflow

#### How to maximize the number of buyers in a shop?

There is a shop which sells N items and there are M buyers. Each buyer wants to buy a specific set of items. However, the cost of a transaction is the same irrespective of the number of items sold, so the shopkeeper needs to maximize the number of buyers. A buyer will buy only if all of their desired items are available. Items are unique. Not all items need to be sold.

So, basically, we have a bipartite graph. We need to find a set of edges which maximizes the number of covered nodes in the buyer vertex set such that each node in the item set has at most one incident edge. Any suggestions?
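Note that this is the set packing problem (pick a maximum number of pairwise-disjoint item sets), which is NP-hard in general, so an efficient exact algorithm is unlikely for large instances. A brute-force sketch over buyer subsets (my illustration; each buyer is encoded as a set of item ids):

```scala
// Pick the largest subset of buyers whose wanted item sets are pairwise disjoint.
// Exponential in the number of buyers; fine only for small M.
def maxBuyers(wants: Seq[Set[Int]]): Int =
  (0 until (1 << wants.size)).map { mask =>
    val chosen = wants.indices.filter(i => (mask & (1 << i)) != 0)
    val sets = chosen.map(wants)
    // The sets are pairwise disjoint iff the total size equals the size of the union.
    if (sets.map(_.size).sum == sets.flatten.toSet.size) chosen.size else 0
  }.max

// Buyers wanting {1,2}, {2,3}, {3}: at most two can be served ({1,2} and {3}).
val best = maxBuyers(Seq(Set(1, 2), Set(2, 3), Set(3)))
```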

### QuantOverflow

#### Who is the issuer and the counterparty of this instrument?

I have the following swap contract: T1UH4, which is a 2-Year Deliverable Interest Rate Swap.

Who is the issuer and who is the counterparty of this swap if I trade it? Is it by default the CME Group?

Thank you

### StackOverflow

#### Building OpenCV and Vision DLLs to use with Clojure on Windows 7

I wanted to play around with Vision, a Clojure binding for OpenCV. It seems to be based on calling methods from its own native library via JNA. Fair enough. Binaries for that library, however, are not provided. And, in my case, that's 64-bit Win7 DLLs.

It seems that OpenCV only ships with vc10, vc11 and vc12 binaries for Windows, and I don't use the Microsoft toolchain. In order to be able to compile libvision.dll, I had to rebuild OpenCV with mingw. I had the same error as this guy, and the accepted answer worked for me; otherwise, that went fine. Even though I did attempt to have an intelligent guess at enabling some extra options in CMake that I thought would be useful. (...So yeah, that's where I might have messed things up, I'm not very intelligent.)

However, I still can't get my Leiningen-based Clojure project to detect the DLL. I have a checkout of the vision repo in checkouts/, and libvision.dll and libvision.dll.a in resources/lib/ as well as checkouts/vision/resources/lib, yet still I keep getting:

#<CompilerException java.lang.UnsatisfiedLinkError: Unable to load
library 'vision': , compiling:(core.clj:6:14)>


Where do I go from here? I'm normally a Python guy and I know bugger all about Java and JNA; I don't think I've ever had to compile a DLL before, either. (I seriously feel as if I've laid an egg right now.) Is there something wrong with the DLL, or with my Leiningen project configuration?

### TheoryOverflow

#### Should O(1) necessarily stand for a non-zero constant? [on hold]

I had a debate with my friend. He argued that $o(1)\subseteq O(1)$, so if a function converges to 0, then it belongs to both $o(1)$ and $O(1)$. However, I imagined that $O(1)$ represents constant time - in essence, a non-zero constant. Is it broadly accepted that a function converging to zero belongs to $o(1)$ and not to $O(1)$?
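For reference, the usual asymptotic definitions support the friend's reading: $O(1)$ only requires an upper bound by some constant, with no non-zero lower bound,

```latex
f \in O(1) \iff \exists\, c > 0,\ n_0 :\ |f(n)| \le c \ \text{ for all } n \ge n_0,
\qquad
f \in o(1) \iff \lim_{n \to \infty} f(n) = 0,
```

and any function tending to $0$ is eventually bounded, hence $o(1) \subseteq O(1)$. (A running time, by contrast, is bounded below by the cost of a single step, which is where the "non-zero constant" intuition comes from.)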

### StackOverflow

I want to install Node.js on my PC-BSD 10 system. I have downloaded the source of the latest Node.js. On the terminal I ran ./configure, which runs fine. After that I tried make, but the console says:

I need GNU make. Please run 'gmake' instead.


Then I tried gmake the terminal says

CORRECT>gmake (y | n | e | a)?


I pressed y, and then again it says "I need GNU make. Please run gmake instead".

How do I install Node.js?

### CompsciOverflow

#### Inorder Successor in BST in O(1)

Can I have successor() and predecessor() in O(1) if I keep pointers to the successor and predecessor of each node? After adding a new node I can find its successor and predecessor in O(log n) time and update its pointers, as well as their predecessor and successor pointers. All this keeps insert/delete at O(log n) as before. Will this work?
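A sketch of the idea being proposed (my own illustration, not library code):

```scala
// BST node threaded with predecessor/successor pointers. During an O(log n)
// insert, the search path already identifies the in-order neighbours, so the
// four pointer updates below cost O(1); successor()/predecessor() become O(1).
class Node(val key: Int) {
  var left, right: Node = null
  var pred, succ: Node = null
}

// Splice a freshly inserted node between its in-order neighbours (null = none).
def link(pred: Node, node: Node, succ: Node): Unit = {
  node.pred = pred; node.succ = succ
  if (pred != null) pred.succ = node
  if (succ != null) succ.pred = node
}

// Insert keys 1, 3, then 2: after linking, the thread reads 1 -> 2 -> 3.
val a = new Node(1); val c = new Node(3); val b = new Node(2)
link(a, c, null)  // 3 inserted after 1
link(a, b, c)     // 2 inserted between 1 and 3
```

Deletion is symmetric: unlink the node from its two neighbours in O(1) after the usual O(log n) BST removal.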

### StackOverflow

#### JavaScript Routing + Scala+ play framework

I'm going to access data that lives in Scala via JavaScript routing code placed in views.index.scala.html.

Application.scala

object Application extends Controller {
def index = Action {
Ok(views.html.index("JavaScript Routing"))
}

def isEmailExist(email: String) = Action { implicit request =>
Usr.findUserByEmail(email) match {
case true => Ok("true")
case false => Ok("false")
}
}
}


conf/Routes

GET     /                                 controllers.Application.index
GET    /isEmailExist/:email               controllers.Application.isEmailExist(email:String)


views.index.scala.html

@(message:String)

<html>
<body>
<p>
Boolean: <span id="boo"></span><br>
</p>
<script>
document.getElementById("boo").innerHTML=controllers.Application.isEmailExist("true");
</script>
</body>
</html>


When I load the URL http://localhost:9000/, the JavaScript call does not work in the browser.

### TheoryOverflow

#### Is this variant of PAC learning known?

Here is a problem I've never seen, in a model similar to the PAC model. It asks a similar question to PAC learning, but wishes to optimize, rather than learn. I wonder if this problem is known, has any name, or has ever been solved.

Input: Random oracle access to a function $f:[0,1]^n \rightarrow [0,1]$ from a concept class $C$. Additionally, $n$ points $x_1,\ldots,x_n$ in the domain.

Goal: Select an $i$ such that $E[f(x_i)]$ is as large as possible. We assume the function $f$ is randomly distributed amongst all functions in the class $C$ that agree with all the samples we have drawn.

One way to solve this problem is to draw many samples from $f$, create a hypothesis $h$, and choose the $x_i$ whose $h(x_i)$ is largest. There are two problems with this:

• $h$ might not be representative of the set of all $f$'s that agree with the samples we've drawn.
• To solve our problem it is not clearly necessary to learn $f$: for some classes $C$ there might be a more efficient way to select a good $i$ without trying to learn $f$.

An example setting is where $C$ is the set of linear classifiers. In that case it should probably be quite easy to solve the problem. But what about other, more complicated classes?

The real version I'm interested in is that of agnostic learning of linear functions: we assume the function $f$ is somewhat correlated with a linear function, and the goal is to choose an $i$ which maximizes $E[f(x_i)]$ to the best of our ability.

### QuantOverflow

#### Please help me calculate the Sharpe ratio [on hold]

I've got a 15-year back-test with the fund value at the end of each day. On some days there were no trades and hence no change. How exactly do I calculate the Sharpe ratio? I am doing it in Excel. A step-by-step guide would be most appreciated. Thank you. Also, what's the risk-free rate in Canada?
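For what it's worth, the daily-returns calculation can be sketched as follows (my sketch; it assumes 252 trading days per year and takes the risk-free rate as an explicit daily input, which could be approximated from a Canadian T-bill yield):

```scala
// Sharpe ratio from end-of-day fund values: mean excess daily return divided by
// the sample standard deviation of excess daily returns, annualized by sqrt(252).
def sharpe(values: Seq[Double], riskFreeDaily: Double = 0.0): Double = {
  val rets     = values.sliding(2).map { case Seq(prev, cur) => cur / prev - 1 }.toVector
  val excess   = rets.map(_ - riskFreeDaily)
  val mean     = excess.sum / excess.size
  val variance = excess.map(r => (r - mean) * (r - mean)).sum / (excess.size - 1)
  mean / math.sqrt(variance) * math.sqrt(252)
}
```

Whether the flat no-trade days should be kept in the return series (they lower both the mean and the volatility) is exactly the kind of judgment call the back-tester has to make explicit.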

### StackOverflow

#### Is there any use for kinds of kinds?

In Haskell, kinds (types of types) allow for some useful things such as type constructors. My question is, would there be any benefit at all to also having kinds of kinds (types of types of types), or is there nothing they could do that couldn't easily be done with just kinds and types?

#### How to tell SBT to resolve managed artifacts

Is there a command in the SBT console that forces it to resolve artifacts (especially, re-resolve SNAPSHOT dependencies)? The only way I know of now is to run clean and then compile (or start), but this takes much longer and isn't always necessary.

#### Find type class instances for Shapeless HList

Say that I have a trait Show[T] such as the one in Scalaz: https://github.com/scalaz/scalaz/blob/scalaz-seven/core/src/main/scala/scalaz/Show.scala#L9

I also have a Shapeless HList that may look like "1" :: 2 :: 3L :: HNil.

Is there a way to find the Show instance for each element and apply shows such that I end up with "1" :: "2" :: "3L" :: HNil?

If any element were of a type that did not have an implicit Show instance in scope I would want a compile error.

I think that if I build up an HList of the Show instances I should be able to use zipApply to get the HList I want, but I don't know if there is a way to have Scala infer the HList of Show instances instead of me building it up by hand.

### CompsciOverflow

#### Variants of the 3-Partition problem

The 3-Partition problem (wiki) is a $\text{NP}$-complete problem which is to decide whether a given multiset of integers can be partitioned into triples that all have the same sum. It is well-known that the 3SAT problem has a plenty of variants. Are there some variants of the 3-Partition problem discussed in the literature?

### StackOverflow

#### Scala: Extract types from generic parameters

I have a class like this:

abstract class Foo[I, T, A <: Bar[I, T]](x: SomeClass[A]){


When I want to inherit from class Foo, I have to specify the types T and I, which could be extracted from the type parameters of type A. (I.e. there is enough data to extract these types.) Does the Scala compiler allow extracting them somehow? I'd like to write something like:

abstract class Foo[A <: Bar[_, _]](x: SomeClass[A]){
type Bar[I, T] = A    // <-- something like pattern matching


It is strange that I can write that, but the type Bar[I, T] = A line does not seem to declare anything. The line passes, but I can use neither type I nor type T.

Can I do something similar?

I know I could use abstract class Foo[I, T](x: SomeClass[A]){ and then define type A = Bar[I, T], but that loses some universality. Additionally, this approach means more (boilerplate) code for the code's users, because they are likely to define a shortcut (i.e. a type alias) for Bar[I, T].

I can rewrite the abstract class Foo to a trait and I probably will do so. But I am not sure if it could help.
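One workaround along these lines is to have Bar mirror its type parameters as type members, so Foo can project them back out of A (a sketch under that assumption; names follow the question):

```scala
trait Bar[I0, T0] { type I = I0; type T = T0 }  // expose the parameters as members
class SomeClass[A]

abstract class Foo[A <: Bar[_, _]](x: SomeClass[A]) {
  type I = A#I  // recovered from A via type projection
  type T = A#T
}

class IntBar extends Bar[Int, String]
class MyFoo(x: SomeClass[IntBar]) extends Foo[IntBar](x) {
  val i: I = 42       // I is Int here
  val t: T = "hello"  // T is String here
}
```

Subclasses of Foo then need to name only A, and I and T come along for free as type members.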

### CompsciOverflow

#### QuickSort Dijkstra 3-Way Partitioning: why the extra swapping?

Given the algorithm here, look at the scenario where i is at "X", the following happens:

Scenario: i -> "X", "X" > "P"

1. swap("X", "Z"), gt--;   // the value at i is now "Z", which is still > "P"
2. swap("Z", "Y"), gt--;   // the value at i is now "Y", which is still > "P"
3. swap("Y", "C"), gt--;    // Now we finally get a value at i "C" which is < "P"
// Now we can swap values at i and lt, and increment them
4. swap("P", "C"), i++, lt++;


Why don't we just decrement gt until gt points to a value that is < the value at lt ("P" in this case), and then swap that value with the value at i? This would save swapping operations.

So if we do that for the scenario mentioned above, we'll do:

1. gt--
2. gt--
3. swap("X", "C"), gt--;
// Now we can swap values at i and lt, and increment them
4. swap("P", "C"), i++, lt++;


Is this excessive swapping needed for the algorithm? Does it improve performance in some way?
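For comparison, here is a sketch of the standard Dijkstra 3-way partition. Note that when a(i) > pivot it swaps and decrements gt without advancing i, which is exactly the repeated swapping the question describes: the element swapped in from gt is still unexamined, so i must stay put.

```scala
// Partition a(lo..hi) around pivot = a(lo); returns (lt, gt) such that
// a(lo..lt-1) < pivot, a(lt..gt) == pivot, a(gt+1..hi) > pivot.
def partition3(a: Array[Int], lo: Int, hi: Int): (Int, Int) = {
  val pivot = a(lo)
  var lt = lo; var i = lo; var gt = hi
  def swap(x: Int, y: Int): Unit = { val t = a(x); a(x) = a(y); a(y) = t }
  while (i <= gt) {
    if (a(i) < pivot)      { swap(lt, i); lt += 1; i += 1 }
    else if (a(i) > pivot) { swap(i, gt); gt -= 1 }  // i stays: new a(i) is unexamined
    else i += 1
  }
  (lt, gt)
}
```

Scanning gt down past large elements before swapping, as the question proposes, saves some intermediate swaps without changing the single O(n) pass, so it is a legitimate micro-optimization rather than a correctness issue.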

### Fefe

#### Wow, even CNN is embarrassed by its coverage ...

Wow, even CNN is embarrassed by its coverage, and for once they let someone speak as an expert who has actually looked at the situation on the ground.

### CompsciOverflow

#### Reduction examples from the strongly NPC problem 3-PARTITION

3-PARTITION is strongly NP-complete, i.e. it remains NP-complete even if the input is given in unary.

I'm searching for two or three examples of (possibly well-known) non-numeric problems that have been proved NP-complete via a reduction from 3-PARTITION (where the reduction obviously relies on the strong NP-completeness). I would like references to the original papers.

#### Type systems understanding problems

I'm not sure if this is the correct place to ask this kind of a question, but here goes:

I'm doing my own reading of the Principles of Program Analysis book, and I'm having trouble understanding some principles from Chapter 5 - Type and Effect Systems.

In the book (page 286) there is an example:

I cannot understand why they start with the expression funF f x => ... Is this maybe guessing the types of the bound variables f and x - we just assume that f is of a function type that takes a function and returns a function, and that x is of a function type (because it seems about right)?

After that we have to determine the type of the function body, so we move to f (fnY y => y). From the assumed types, we infer the type of f (the top-bottom rule in the image), and then we move to fnY y => y, where it is straightforward that the type of that expression is t -> t (a function that returns the same type that it was given).

Now that we know this, we can determine the type of the function application f (fnY y => y) as t -> t.

Then we move up again to determine the type of funF ..., which according to the rules for recursive functions is straightforward (the same type as for f in the assumed type environment).

Then we move to the in body: g (fnZ => z). g's type has already been determined immediately above, so we move to fnZ => z, which is handled the same way as fnY. And in the end we get that the whole expression has the type t -> t.

What I'm asking is: is this the correct train of thought? Or am I missing the point somewhere?

I'm not sure about the guessing part - why did we start where we did, and not more simply with variable y or somewhere else? The rest of the chapter heavily depends on a proper understanding of these concepts, and I would like to understand this.

In general, I would be grateful if somebody could point me to a book or something that explains type systems in more detail.

Thank you!

### QuantOverflow

#### How to automate tracking of changes in stock symbols?

One way is to write some code, crawl Yahoo Finance, and store the stock names and symbols in a database. Crawl periodically, and if there is a new symbol, check whether the same stock name already exists in the database. If the stock name exists, we know that this stock has changed its symbol.

However, what if the stock changes both its name and its symbol? Is it possible to check for that?
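The comparison step described above can be sketched like this (my illustration; `old` and `fresh` are snapshots mapping symbol to company name):

```scala
// Detect symbol changes: a name present in both snapshots under different symbols.
// A simultaneous name+symbol change is indistinguishable, from this data alone,
// from a delisting plus an unrelated new listing - extra keys (CUSIP/ISIN) would
// be needed to resolve that case.
def symbolChanges(old: Map[String, String], fresh: Map[String, String]): Map[String, String] = {
  val nameToOldSym = old.map { case (sym, name) => name -> sym }
  fresh.collect {
    case (sym, name) if !old.contains(sym) && nameToOldSym.contains(name) =>
      nameToOldSym(name) -> sym  // old symbol -> new symbol
  }
}

val renames = symbolChanges(
  old   = Map("ABC" -> "Acme Corp", "XYZ" -> "Xylo Inc"),
  fresh = Map("ACM" -> "Acme Corp", "XYZ" -> "Xylo Inc")
)
```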

### CompsciOverflow

#### Explaining the difference between computer science and computer literacy

What is a good metaphor or example to explain to an English major the difference between classical computer science and "being good with using MS-Windows"?

• computer science
• computer programming
• using computers

These are three profoundly different things. Most people have no idea what Computer Science even is; they just see the word "computer". Hence, "he is a Computer Science major" can be interpreted as "he can hook up my printer", or that he's "good with computers". Even fewer people know the difference between computer programming and Computer Science.

Computer Science is computing theory. CS can be learned without actual computers: CPU microarchitecture; how to sort numbers and traverse lists; state machines; algorithms and big-O; how to design a programming language or compiler.

Programming is writing code and creating applications in a language and compiler created by a computer scientist.

Lastly, there is using a computer (using a GUI, mouse, and keyboard; the Internet, MS-Office, etc.).

Yet all three of these are used interchangeably by laymen.

What is a good metaphor or example to explain to an English major the difference between classical computer science and "being good with using MS-Windows"? Or simply, a pithy example of how real computer science has nothing to do with using MS-Windows.

### Fefe

#### A pro-Russian conspiracy site spreads rumors that the ...

A pro-Russian conspiracy site is spreading rumors that Ukraine's gold reserves are being flown out of the country right now.

#### Bug of the day: Actual code used for voting in a real ...

Bug of the day: Actual code used for voting in a real country. You know what's missing there? goto fail!

### Fred Wilson

#### Video Of The Week: Jack Dorsey At The 99% Conference

Back in 2010, Scott Belsky asked me to give a talk at The 99% Conference. That’s when and where I delivered the 10 Ways To Be Your Own Boss talk.

Jack Dorsey followed me on stage and delivered this 15-minute talk. I sat in the audience for Jack's talk and loved it, so I thought I'd feature it here this week. It's four years old but as relevant today as ever.

### StackOverflow

#### java.lang.RuntimeException: could not find scala-library.jar

I have a Play project which is built using Java 1.7 and Play 2.2.0, and I am trying to create Eclipse project files for it using the following commands:

 F:\Projects\test>play

[test] $clean [test]$ compile

[test] $eclipse with-source=true  But it is throwing following error: - java.lang.RuntimeException: could not find scala-library.jar at play.PlayEclipse$$anon7$$anonfun$createTransformer$3$$anonfun3.apply(PlayEclipse.scala:80) at play.PlayEclipse$$anon$7$$anonfuncreateTransformer3$$anonfun$3.apply(PlayEclipse.scala:80) at scala.Option.getOrElse(Option.scala:120) at play.PlayEclipse$$anon7$$anonfun$createTransformer$3.apply(PlayEclipse.scala:80) at play.PlayEclipse$$anon7$$anonfun$createTransformer$3.apply(PlayEclipse.scala:79) at scalaz.Validation$class.map(Validation.scala:114)
at scalaz.Success.map(Validation.scala:343)
at play.PlayEclipse$$anon7.createTransformer(PlayEclipse.scala:79) at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5$$anonfunapply4$$anonfun$6.apply(Eclipse.scala:120)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5$$anonfun$apply$4$$anonfun$6.apply(Eclipse.scala:120)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5$$anonfun$apply$4.apply(Eclipse.scala:120)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5$$anonfun$apply$4.apply(Eclipse.scala:116)
at scala.Option$WithFilter.map(Option.scala:206)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5.apply(Eclipse.scala:116)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$5.apply(Eclipse.scala:115)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
at com.typesafe.sbteclipse.core.Eclipse.handleProjects(Eclipse.scala:115)
at com.typesafe.sbteclipse.core.Eclipse.action(Eclipse.scala:101)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$eclipseCommand$2.apply(Eclipse.scala:82)
at com.typesafe.sbteclipse.core.Eclipse$$anonfun$eclipseCommand$2.apply(Eclipse.scala:82)
at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:60)
at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:60)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
at sbt.Command.process(Command.scala:95)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:87)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:87)
at sbt.State$$anon$1.process(State.scala:176)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:87)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:87)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.MainLoop$.next(MainLoop.scala:87)
at sbt.MainLoop$.run(MainLoop.scala:80)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:69)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:66)
at sbt.Using.apply(Using.scala:25)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:66)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:49)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:33)
at sbt.MainLoop$.runLogged(MainLoop.scala:25)
at sbt.xMain.run(Main.scala:26)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:57)
at xsbt.boot.Launch.withContextLoader(Launch.scala:77)
at xsbt.boot.Launch.run(Launch.scala:57)
at xsbt.boot.Launch$$anonfun$explicit$1.apply(Launch.scala:45)
at xsbt.boot.Launch$.launch(Launch.scala:65)
at xsbt.boot.Launch$.apply(Launch.scala:16)
at xsbt.boot.Boot$.runImpl(Boot.scala:32)
at xsbt.boot.Boot$.main(Boot.scala:21)
at xsbt.boot.Boot.main(Boot.scala)
at xsbt.boot.Boot.main(Boot.scala)
[error] could not find scala-library.jar
[error] Use 'last' for the full log.
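This failure usually means sbteclipse could not find the Scala library jar on the project's resolved classpath. As a hedged sketch only (the version number below is an illustrative assumption, not something reported in the log), pinning `scalaVersion` in `build.sbt` and running `sbt update` before `sbt eclipse` is a common first step, since sbt derives the scala-library dependency from that setting and sbteclipse reads it from the ivy cache:

```scala
// build.sbt -- illustrative sketch; the version is an assumption.
// sbt adds scala-library automatically from scalaVersion, so running
// `sbt update` first ensures the jar is actually downloaded before
// `sbt eclipse` tries to reference it.
scalaVersion := "2.10.3"
```

After that, `sbt update eclipse` may succeed where `sbt eclipse` alone failed.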


### /r/emacs

#### I forget what the plugin is, can you help me?

It's a plugin used to write lisp. It will show something like (defun xx) or (BODY) in the minibuffer while you are editing.

I forget whether it is slime or something else. Thank you

submitted by goofansu

### StackOverflow

#### Android studio on raspberry pi

I'm trying to run Android Studio on a Raspberry Pi (with Raspbian). I've installed openjdk-7-jdk but I get an error: http://pastebin.com/WvFRi7S9

So, is there any solution?

#### Strange type of scala "=>:[_, _]"

I saw the "A =>: A" or "=>:[A, A]" type signature in scalaz.

https://github.com/scalaz/scalaz/blob/scalaz-seven/core/src/main/scala/scalaz/Category.scala

I think this is some kind of Scala embedded type, but what is it?
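For context, a minimal sketch (not scalaz's actual definition): in Scala, any type constructor with two parameters can be written infix, so `A =>: B` is simply infix notation for `=>:[A, B]`. The trait and object names below are hypothetical stand-ins used only to demonstrate the syntax.

```scala
// Hypothetical stand-in for a symbolic two-parameter type
// constructor like scalaz's =>: (shown for syntax only).
trait =>:[-A, +B] {
  def apply(a: A): B
}

object InfixTypeDemo {
  // The compiler parses `Int =>: String` as `=>:[Int, String]`,
  // so these two annotations name the same type.
  val f: Int =>: String = new (Int =>: String) {
    def apply(a: Int): String = a.toString
  }
  val g: =>:[Int, String] = f
}
```

In scalaz, `Category` abstracts over such a two-parameter type constructor, which is why its signatures read as `A =>: B`.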

#### Scala system process hangs

I have an actor that uses ProcessBuilder to execute an external process:

    def act {
      while (true) {
        receive {
          case param: String =>
            val filePaths = Seq("/tmp/file1", "/tmp/file2")
            val fileList = new ByteArrayInputStream(filePaths.mkString("\n").getBytes())
            val output = s"myExecutable.sh ${param}" #< fileList !!<
            doSomethingWith(output)
        }
      }
    }

I run hundreds of these actors in parallel. Sometimes, for an unknown reason, the execution of the process (!!) never returns: it hangs forever, and that specific actor cannot handle new messages. Is there any way to set a timeout for this process, and to retry if it is exceeded? What could be the reason for these executions hanging forever? These commands are not supposed to last more than a few milliseconds.

Edit 1: Two important facts that I observed:

1. This problem does not occur on Mac OS X, only on Linux
2. When I don't use ByteArrayInputStream as input for the execution, the program does not hang

### Planet Clojure

#### Combining Clojure and ClojureScript Libraries

ClojureScript is an incredible new project that any Clojurian can appreciate. What could be better than replacing vexing JavaScript with the clean, flowing syntax of Clojure? But ClojureScript is very new, and in some cases its lack of polish can be frustrating. One of these frustrating aspects is ClojureScript's failure to completely abstract JavaScript behind Clojure syntax. This comes from the fact that ClojureScript does not currently have all the functionality of its counterpart. For example, Clojure's bound? function is not implemented in ClojureScript (see "bound?" in the ClojureScript devnotes). That may seem trivial, but if you're hoping to transfer a piece of Clojure code to ClojureScript, and that code has a bound? call, you must learn additional aspects of ClojureScript to get your code to work. You will also have to maintain the new ClojureScript version of your code along with your pre-existing Clojure version. These small differences can add up, and before you know it you could be maintaining two separate libraries. The kicker, of course, is that these libraries have about 95% of their code in common, but the 5% of difference simply cannot be resolved.
We encountered this problem here at 8th Light while extending our Clojure testing framework Speclj to ClojureScript. There were numerous ClojureScript incompatibilities sprinkled throughout the existing Clojure library. In our original solution, we built a pre-compiler that would run against the Clojure code. This program could switch out all of the incompatibilities with their ClojureScript equivalents. However, the solution came with a few major drawbacks. First, the pre-compiler was a one-off design, built specifically for the Speclj project. There was little chance that it would be maintained, which introduced an added layer of fragility to our project. More importantly, the ClojureScript version was still its own library. This led to the confusing distinction between Speclj (Clojure) and Specljs (ClojureScript). Both libraries had to be added independently to a project.

Recently, we took another look at our two massively similar libraries and asked if, with the changes to the Clojure/ClojureScript landscape, we could do better. Not to spoil the fun, but we were able to combine our two libraries into one. Unfortunately it did take a few absolutely-necessary hacks, but that is the cost of working with such an exciting, constantly changing technology like ClojureScript. We definitely look forward to the day when ClojureScript becomes an effortless abstraction of JavaScript, but until then a little creativity will have to suffice.

## Setting Up Your Base Project:

A Sample Base Project is available if you would like to use it. It already has Speclj installed and running for both Clojure and ClojureScript. It also has branches with working code for each part of this tutorial. We will be referencing this project throughout this tutorial. Regardless of whether you use the sample project, your project structure should look similar to its structure. This is especially true for your src/ and spec/ paths.
You will want a structure that resembles src/file-type/project-name/code.file-type. For example, in the sample project we have src/clj/myproject/core.clj for the Clojure source and src/cljs/myproject/core.cljs for the ClojureScript source. A similar structure should be used for your tests.

    myproject
    |
    |--- test
    |    |--- clj
    |    |    |--- myproject
    |    |         |--- test-code.clj
    |    |--- cljs
    |         |--- myproject
    |              |--- test-code.cljs
    |--- src
         |--- clj
         |    |--- myproject
         |         |--- source-code.clj
         |--- cljs
              |--- myproject
                   |--- source-code.cljs

### Dividing Your ClassPaths with Profiles:

The Clojure and ClojureScript versions of your project, although residing within the same jar, will have different classpaths. It is important that you isolate these classpaths in a way that will allow you to test and run the two portions of your library independently. This can be done by adding separate profiles to your project.clj file. Let's make :clj and :cljs profiles. Both profiles will use the org.clojure/clojure and speclj dependencies, so those should remain outside the :profiles map. That means that all we need for the :clj profile is:

    :clj {:source-paths ["src/clj"]
          :test-paths ["spec/clj"]}

For the :cljs profile, we will need the standard :cljsbuild information. Additionally, the sample base project uses Speclj for testing, so we will see some syntax necessary for Speclj as well.

    :cljs {:dependencies [[org.clojure/clojurescript "0.0-2014"]
                          [org.clojure/tools.reader "0.7.10"]
                          [lein-cljsbuild "1.0.2"]]
           :plugins [[lein-cljsbuild "1.0.2"]]

           :cljsbuild ~(let [run-specs ["bin/speclj" "target/tests.js"]]
                         {:builds
                          {:dev {:source-paths ["src/cljs" "spec/cljs"]
                                 :compiler {:output-to "target/tests.js"
                                            :pretty-print true}
                                 :notify-command run-specs}}
                          :test-commands {"test" run-specs}})}

Now let's make our lives easier by adding testing aliases to our project.
Here is the alias to run your Clojure Speclj tests using the :clj profile:

    :aliases {"clj-test" ["with-profile","clj","spec"]}

The ClojureScript testing alias looks very similar:

    :aliases {"clj-test" ["with-profile","clj","spec"]
              "cljs-test" ["with-profile","cljs","cljsbuild","test"]}

Now, if you have Speclj configured correctly, you can run lein clj-test and lein cljs-test from the command line to run your Clojure and ClojureScript tests respectively.

## Working With CLJX

sample code through CLJX

Next, we will use the CLJX library. CLJX translates a .cljx file into separate .cljs and .clj files. You can use small #+cljs and #+clj tags to differentiate which forms you would like included in which version. In the 8th Light Speclj project, CLJX replaced our hand-rolled pre-compiler. This gave us the benefit of relying on an open-source, updated dependency instead of our own program.

However, CLJX comes with a few downsides. First, you will have to keep track of the status of your cljx results. If you make a change in a .cljx file and, for whatever reason, do not recompile the cljx folder, your changes will not appear in your .clj and .cljs files. Second, you should be careful not to make changes to the generated .clj and .cljs files, since they will be overwritten the next time you generate your cljx output. Third, if you are running a test autorunner, it will likely not pick up changes to .cljx files. So CLJX comes with a cost, but it does allow you to keep a relatively similar code base for your Clojure and ClojureScript libraries.

### Adding CLJX to your Project.clj

To add CLJX, simply add [com.keminglabs/cljx "0.3.1"] to your general :dependencies map.
You will then have to configure source and output paths in the :cljx key:

    :cljx {:builds [{:source-paths ["src/cljx"]
                     :output-path "target/generated/src/clj"
                     :rules :clj}
                    {:source-paths ["src/cljx"]
                     :output-path "target/generated/src/cljs"
                     :rules :cljs}
                    {:source-paths ["spec/cljx"]
                     :output-path "target/generated/spec/clj"
                     :rules :clj}
                    {:source-paths ["spec/cljx"]
                     :output-path "target/generated/spec/cljs"
                     :rules :cljs}]}

So now every .cljx file in our src/cljx folder will be translated into separate .clj and .cljs versions, which will be stored in the target/generated/src/ folder. It will also help to add the cljx hooks so that cljx automatically builds your files when you run a normal leiningen command:

    :hooks [cljx.hooks]

### Configuring Your Source-Paths and Test-Paths

With cljx installed, we will have to change our source-paths and test-paths so that they look for the files generated by cljx. For your :clj profile, simply modify the paths like below:

    :clj {:source-paths ["src/clj", "target/generated/src/clj"]
          :test-paths ["spec/clj", "target/generated/spec/clj"]}

For your :cljs profile, your new source and test resources will both go in the :source-paths collection:

    {:source-paths ["src/cljs"
                    "spec/cljs"
                    "target/generated/src/cljs"
                    "target/generated/spec/cljs"]
     :compiler {:output-to "target/tests.js"
                :pretty-print true}
     :notify-command run-specs}

### Making Things Easy with Aliases

We're almost done. With cljx hooks in your project.clj, cljx will auto-generate files before you run your Clojure tests. However, for ClojureScript we'll have to tell leiningen to compile cljx before we run tests. We can do this easily by changing our cljs-test alias:

    :aliases {"cljs-test" ["do" "cljx," "with-profile" "cljs" "cljsbuild" "test"]}

We are also a little more concerned about the target directory, so we may want to clean that directory before regenerating our cljx code.
We can add a few aliases that help us with that:

    :aliases {"clj-clean-test" ["do" "clean," "clj-test"]
              "cljs-clean-test" ["do" "clean," "cljs-test"]}

Lastly, we can add a final alias that will run both our clj and cljs tests in one command:

    :aliases {"all-tests" ["do" "clean," "clj-test," "cljs-test"]}

Now our project.clj file is updated. We should now be able to add a .cljx file to our project and it will generate separate but similar .clj and .cljs files.

### Adding a .cljx File to your Project

If you are using the sample project, you will see that we already have src/clj/myproject/core.clj and src/cljs/myproject/core.cljs. We will create a similar directory structure for the .cljx files. Let us make a src/cljx/myproject/ folder and add shared_file.cljx to the new folder. Next, let us make a spec/cljx/myproject/ folder and add shared_file_spec.cljx.

In shared_file.cljx add:

    (ns myproject.shared-file)

    (defn multiply [x y]
      (* x y))

In shared_file_spec.cljx add:

    (ns myproject.shared-file-spec
      (#+clj :require #+cljs :require-macros [speclj.core :refer [describe it should=]])
      (:require [speclj.core]
                [myproject.shared-file :as shared-file]))

    (describe "sample cljx file"

      (it "uses cljx files to generate tested code in clj and cljs"
        (should= 12 (shared-file/multiply 3 4))))

As you can see, the spec file includes #+clj and #+cljs tags. For this file, that means that cljx will emit :require in the Clojure version of the file and :require-macros in the ClojureScript version of the file. These tags will include or exclude the entire form that follows them, so one tag can include or exclude an entire function. As a side note, the :require-macros key is the way we get access to our Clojure macros in ClojureScript. Since Speclj uses macros for both Clojure and ClojureScript, the small #+ tags let us properly import the macros for both platforms.
### Run our Tests with A CLJX file

Now that we have cljx set up, along with a .cljx file and test file, we can run lein clj-clean-test and lein cljs-clean-test. Both should evaluate the multiply test included in the single .cljx file on their respective platforms by testing against the cljx-generated code. Now we have a single .cljx source and spec file that will be generated into separate .clj and .cljs files, and we can test our code in both Clojure and ClojureScript.

## Using Platform Files to Isolate Library Differences

sample code through Platform Files

Now that we can write a single file that ultimately becomes separate .clj and .cljs files, we can look at how we're going to deal with the differences between the Clojure and ClojureScript platforms. We will isolate these differences in two files (one .clj and one .cljs) with the same file name and same namespace name. We will then reference this common namespace in our code. When our Clojure code runs, the .clj namespace will execute, and when we run our ClojureScript code, our .cljs namespace will execute. Thus the rest of our files can be written without a need to focus on platform details. An example will illustrate the project design:

### Platform Files: An Example

Let's say we want to use our platform's abs function to find the absolute value of a number. In Clojure this is done using the org.clojure/math.numeric-tower library, while ClojureScript would use JavaScript's Math/abs function. To set up this example, we'll make a .cljx function that uses abs.

Let's add an "absolute difference" functionality to shared_file.cljx. This functionality will simply find the absolute difference between two numbers. As always, we'll start with a test. Add the code below to the describe block in your shared_file_spec.cljx:

    (it "finds the absolute difference between two numbers"
      (should= 1 (shared-file/abs-diff -101 100)))

This test will simply help us decide if everything is working correctly. Now let's focus on the source code.
Add the code below to your shared_file.cljx:

    (ns myproject.shared-file
      (:require [myproject.platform :as platform]))

And add your new function, which should look like this:

    (defn abs-diff [x y]
      (- (platform/abs x) (platform/abs y)))

As you can see, we're referencing a platform namespace. This namespace will change based on which platform we're executing. Let's now make our platform namespace files. Create platform.clj in src/clj/myproject/ and platform.cljs in src/cljs/myproject/.

In platform.clj add:

    (ns myproject.platform
      (:require [clojure.math.numeric-tower :as math]))

    (defn abs [num]
      (math/abs num))

You'll also have to add the numeric-tower dependency to your :clj profile in project.clj:

    :dependencies [[org.clojure/math.numeric-tower "0.0.4"]]

Now let's move to the cljs side. In platform.cljs add:

    (ns myproject.platform)

    (defn abs [num]
      (js/Math.abs num))

Now we have two like-named functions in two like-named namespaces. If we run our tests, they'll pass. This works because when Clojure runs, the platform.clj file will be used and the numeric-tower library will be executed. When ClojureScript runs, the platform.cljs file will be used and JavaScript's abs function will be executed. Our shared-file doesn't need to know about those details. It can simply call the platform namespace's function and receive the results.

We now have a function, abs-diff, that is written just once yet can be used for both Clojure and ClojureScript. This means not only that we can write a single code base that runs in both Clojure and ClojureScript, but also that the differences between the two platforms are isolated to the platform namespace.

## Using Platform Files with Macros

sample code through Platform Files with Macros

In the last section we used separate .clj and .cljs files with the same namespace name to isolate the platform differences between Clojure and ClojureScript.
But what if we want to use these files in macros? You can still use them, but a few tweaks to our previous platform strategy are needed. The issue with macros is that there will be no equivalent .cljs macro file. Most macros will stay on the Clojure side of your project. Let's take a look at an example.

### Platform Files and Macros: An Example

First, we need to tweak your project.clj file. Add the snippet below to your :cljs profile. That way your Clojure macros will be available to your ClojureScript profile.

    :source-paths ["src/clj"]

Now let's add a new test to our shared_file_spec.cljx file which will test a simple macro. Create a macros.clj file in your src/clj/myproject/ folder. You can leave it empty for now. Next, add the dependency to your shared_file_spec.cljx under :require-macros. Your namespace should now look like this:

    (ns myproject.shared-file-spec
      (#+clj :require #+cljs :require-macros
        [speclj.core :refer [describe it should=]]
        [myproject.macros :as macros])
      (:require [speclj.core]
                [myproject.shared-file :as shared-file]))

For this example, we'll wrap our existing platform/abs functionality in a macro. But tests come first! Add a new test that will evaluate an abs macro located in your macros file:

    (it "finds absolute value using macro"
      (should= 2 (macros/abs -2)))

So now we have a test, but it won't pass since we have nothing in our macros.clj file. Let's now add a namespace and a simple abs macro to macros.clj:

    (ns myproject.macros
      ;(:require [myproject.platform]) ;uncommenting will break cljs tests
      )

    (defmacro abs [x]
      `(myproject.platform/abs ~x))

Now here is where things get interesting. As you can see, we've commented out the :require statement and our macro uses a fully qualified namespace. It seems like our macro might not be able to find the platform namespace. But let's run our tests. They should pass! Now uncomment the :require statement and rerun your tests.
Your ClojureScript tests will fail to run! It seems like we should :require our platform namespace since the file uses it, but this will actually break the ClojureScript-side tests. This is because the macro file will always attempt to evaluate the .clj version of our platform file if it is listed in the :require clause. By using a fully-qualified namespace in our macro instead of referencing it through a :require statement, the correct platform file will be evaluated at macro expansion. So, when it comes to macros, you should not :require the platform file but instead use its fully-qualified namespace where it is needed.

Now we've seen how to use platform files to isolate platform differences in both normal functions and Clojure macros. These platform files can get you far, but they don't get you all the way there. In the next part of the tutorial we'll see how to use an ugly but effective "if" statement to get essentially complete cross-platform functionality.

## Powerful but Perilous Context-aware Macros

sample code through Context-aware Macros

In the previous portion of this tutorial, we were able to get a great deal of cross-platform functionality using platform files. But now we'll look at a way to define a Clojure macro with platform-specific code, using a very fragile if statement. First, let's look at an example:

### Context-aware Macros: An Example

In your shared_file_spec.cljx file add this test:

    (it "catch slurp failure"
      (should= true (macros/slurpable-file? "badfilename")))

In this test, we're attempting to slurp a bad file name. In both Clojure and ClojureScript, this will raise an exception. But exceptions are a little different in Java and JavaScript: Java will require an Exception while JavaScript will use a js/Object. You can see the Clojure documentation for an example of both situations. Let's go to our macros.clj file and see what we can do to pass this test for both platforms.
Here's what the Clojure version might look like. But of course it won't work in ClojureScript: ClojureScript won't know what to do with Exception.

    (defmacro slurpable-file? [file-name]
      `(try
         (slurp ~file-name)
         (catch Exception e# true)))

Here's what the ClojureScript version might look like. But of course it won't work in Clojure: Clojure won't know what to do with js/Object.

    (defmacro slurpable-file? [file-name]
      `(try
         (slurp ~file-name)
         (catch js/Object e# true)))

### Finding the Context

We have two separate macros that simply won't work on both platforms. But what if we could determine whether the macros.clj file was expanding in a Clojure context or a ClojureScript context? Maybe then we could use this information to build the correct macro for the currently-executing library. This is where a fragile "if" statement comes into play. It looks like this:

    (defn cljs? []
      (boolean (find-ns 'cljs.analyzer)))

As you can see, it decides that the file is running in a ClojureScript context if the cljs.analyzer namespace can be found. Otherwise, we will assume it is in a Clojure context. We can use this function to create a macro that combines our two previous macros:

    (defmacro slurpable-file? [file-name]
      `(try
         (slurp ~file-name)
         ~(if (cljs?)
            '(catch js/Object e# true)
            '(catch Exception e# true))))

If we try this macro, it will pass the tests for both platforms. It does this by adding the correct platform-specific catch statement during macro expansion. Thus we have a macro that, to some degree, is aware of the context of its expansion. This may seem to open up an amazing set of functionality, but the if statement is fragile. It relies on the existence (or lack thereof) of a ClojureScript-specific namespace. If something changed in ClojureScript, the entire library could fail. Using the cljs? function above is thus a bit of a hack, but it gets us where we want to go and there are few other options.
### ns-resolve Can Help Too

As a brief side note, the hack noted above will still fail if the Clojure compiler encounters the name of a currently absent ClojureScript namespace. An example is cljs.compiler/munge. If this is included in a .clj macro, your Clojure-side tests will fail because Clojure will not find the namespace. However, we can get around this using ns-resolve. Instead of referencing cljs.compiler/munge, we can replace it with:

    (ns-resolve 'cljs.compiler 'munge)

This essentially pushes the cljs.compiler namespace check to runtime. If you have your macro set up correctly, the cljs namespace should never be resolved in Clojure.

So now we've seen how to use platform files to isolate platform-specific code, and we've also seen a little hack that can help when macros must be defined in a platform-specific manner. In the next part of this tutorial we'll put it all together, quite literally, and combine our Clojure and ClojureScript libraries into a single jar.

## Adding clj and cljs Code to a Single Jar

sample code through Single Jar

In the previous parts of this tutorial we've built a code base that can deliver the same functionality in both Clojure and ClojureScript. Now we'll see how we can deploy this functionality in one jar. This gives others the ability to import one library and gain both your clj and cljs functionality. Let's go to our project.clj file.
We'll add an entirely new profile called :combined:

    :profiles {
      :combined {:dependencies [[org.clojure/math.numeric-tower "0.0.4"]
                                [org.clojure/clojurescript "0.0-2014"]
                                [org.clojure/tools.reader "0.7.10"]
                                [lein-cljsbuild "1.0.2"]]

                 :source-paths ["src/clj", "target/generated/src/clj"]
                 :resource-paths ["src/cljs", "target/generated/src/cljs"]
                 :test-paths ["spec/clj", "target/generated/spec/clj"]

                 :cljsbuild ~(let [run-specs ["bin/speclj" "target/tests.js"]]
                               {:builds
                                {:dev {:source-paths ["src/cljs"
                                                      "spec/cljs"
                                                      "target/generated/src/cljs"
                                                      "target/generated/spec/cljs"]
                                       :compiler {:output-to "target/tests.js"
                                                  :pretty-print true}}}
                                :test-commands {"test" run-specs}})}}

This profile effectively combines our :clj and :cljs profiles into one. You'll note that all of the dependencies of both profiles are added to the :combined profile. We've also added :resource-paths with src/cljs and target/generated/src/cljs. This will add our ClojureScript files to the jar, while the normal :source-paths will add our Clojure code.

So let's test this new, combined profile. Here, an alias will be helpful:

    "combined-tests" ["do" "clean," "with-profile" "combined" "spec," "with-profile" "combined" "cljsbuild" "test"]

Now if we run lein combined-tests, everything should pass! If you run lein with-profile combined jar and go to your target/ directory, you can run jar tf on the generated .jar file and see how both the .clj and .cljs files are combined into the same library. All we have to do now is install our project using the combined profile. Again we'll use an alias to make things easy:

    "install" ["do" "clean," "with-profile" "combined" "install"]

Now we can lein install and we will have a single library that works for both Clojure and ClojureScript.

That's the end of the tutorial. I hope you enjoyed it!
### StackOverflow

#### gatling - extract cookie value string during test

My tests run fine, but now I need multiple sessions running at once. I've tried getting the cookie value using

    headerRegex("Set-Cookie", "HOME_SESSID=(.*)").saveAs("homeSessid")

but when I print this out it returns a value of

    com.excilys.ebi.gatling.http.check.HttpMultipleCheckBuilder@6075598

I have no idea where this is coming from. My question is: what is going on? Thanks.

edit: forgot to mention that the value it returns is not a session id, and no matter what I use for the cookie name I get the same value.

### /r/netsec

#### PREC: Practical Root Exploit Containment for Android Devices

### StackOverflow

#### sonar-maven-plugin throws duplicated resource error

I am trying to build and test a Scala project. I am using scala-test instead of surefire. I keep getting this duplicated resource error, but I don't know where or what resource was duplicated. The command I used is "maven test sonar:sonar". Here is the output:

    INFO: SonarQube Server 3.7.4
    [INFO] [16:36:12.615] Load batch settings
    [INFO] [16:36:12.650] User cache: /Users/carolyn_cheng/.sonar/cache
    [INFO] [16:36:12.653] Install plugins
    [INFO] [16:36:13.069] Install JDBC driver
    [INFO] [16:36:13.074] Create JDBC datasource for jdbc:mysql://localhost:3306/sonar
    [INFO] [16:36:13.873] Initializing Hibernate
    [INFO] [16:36:15.527] Load project settings
    [INFO] [16:36:15.547] Apply project exclusions
    [INFO] [16:36:15.676] ------------- Scan Orbit
    [INFO] [16:36:15.678] Load module settings
    [INFO] [16:36:15.790] Quality profile : [name=sonar,language=scala]
    [INFO] [16:36:15.801] Excluded tests:
    [INFO] [16:36:15.802]   **/package-info.java
    [INFO] [16:36:15.829] Configure Maven plugins
    [INFO] [16:36:15.854] Compare to previous analysis (2014-03-03)
    [INFO] [16:36:15.862] Compare over 30 days (2014-02-05, analysis of 2014-02-26 13:57:06.0)
    [INFO] [16:36:15.869] Compare to previous version
    [INFO] [16:36:15.929] Base dir: /Users/carolyn_cheng/WorkSpace/Orbit
    [INFO] [16:36:15.929] Working dir: /Users/carolyn_cheng/WorkSpace/Orbit/target/sonar
    [INFO] [16:36:15.929] Source dirs: /Users/carolyn_cheng/WorkSpace/Orbit/src/main/java, /Users/carolyn_cheng/WorkSpace/Orbit/src/main/scala
    [INFO] [16:36:15.929] Test dirs: /Users/carolyn_cheng/WorkSpace/Orbit/src/test/java, /Users/carolyn_cheng/WorkSpace/Orbit/src/test/scala, /Users/carolyn_cheng/WorkSpace/Orbit/src/test/java/../scala
    [INFO] [16:36:15.929] Binary dirs: /Users/carolyn_cheng/WorkSpace/Orbit/target/classes
    [INFO] [16:36:15.929] Source encoding: UTF-8, default locale: en_US
    [INFO] [16:36:15.966] Sensor ScalaSourceImporterSensor...
    [ERROR] Duplicate source for resource: org.sonar.plugins.scala.language.ScalaFile@36961e51
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 48.765s
    [INFO] Finished at: Fri Mar 07 16:36:18 PST 2014
    [INFO] Final Memory: 61M/445M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal org.codehaus.sonar:sonar-maven-plugin:3.7.4:sonar (default-cli) on project Orbit: Duplicate source for resource: org.sonar.plugins.scala.language.ScalaFile@36961e51 -> [Help 1]
    [ERROR]

For some reason, I always get a clover-report.xml file even though I am not using clover at the moment. Please help.

#### Clojure returns the list of all pairs in seq that have key as their first element

I need to define a function called (get-all-pairs key seq). It returns the list of all pairs in seq that have key as their first element. If no pairs match, then the empty list is returned. For example, if I def pets

    (def pets '((cat 1) (dog 1) (fish 1) (cat 2) (fish 2)))

then (get-all-pairs 'cat pets) returns ((cat 1) (cat 2)), and (get-all-pairs 'bird pets) returns '().
Here is my try:

    (defn get-all-pairs [key seq]
      (cond (= key (first (first (seq)))) (cons (first seq) (get-all-pairs key (rest seq)))
            :else '()))

But it does not work. If I call it, I get the following message:

    #'proj2.proj2/pets
    => (get-all-pairs 'cat pets)
    ClassCastException clojure.lang.PersistentList cannot be cast to clojure.lang.IFn  proj2.proj2/get-all-pairs (proj2.clj:20)

I don't know where the problem is. How do I fix it?

### QuantOverflow

#### What's the underlying idea of the definition of constrained market in Skiadas' Asset Pricing Theory?

I'm self-studying Skiadas' Asset Pricing Theory, and find the definition of a constrained market on page 21 confusing (you can find it here in the sample chapter).

Definition 1.26. A constrained market is a closed convex set of cash flows $X \subseteq \Bbb R^{1+K}$ such that $0 \in X$ and, for some $\epsilon > 0$, $x \in X$ and $0 < \| x \| < \epsilon$ implies $\frac{\epsilon}{\| x \|}x \in X.$

I know this definition renders missing markets and short-sale constraints as special cases, but the underlying idea of this formulation still eludes me.

### StackOverflow

#### Scala for loop with multiple variables

How can I translate this loop (in Java) to Scala?

    for(int i = 0, j = 0; i < 10; i++, j++) {
        //Other code
    }

My end goal is to loop through two lists at the same time. I want to get both elements at the same time at each index in the iteration.

    for(a <- list1, b <- list2) // doesn't work
    for(a <- list1; b <- list2) // gives me the cross product

### TheoryOverflow

#### Natural graph class with five excluded subgraphs?

I'm interested in hereditary graph classes characterized by a small number of excluded subgraphs. There are some well-known graph classes that are characterized by three or four obstructions -- examples are threshold graphs, chain graphs and trivially perfect graphs. My question is: are there natural graph classes characterized by five obstructions? (No relation to the eponymous movie).
It may be possible to obtain some by considering the $P_4$-structure of the graph.

### StackOverflow

#### Unable to import MySQLdb in an ansible module

I am trying to write a custom module in ansible. When I use import MySQLdb it gives me this error:

    failed: [127.0.0.1] => {"failed": true, "parsed": false}
    invalid output was: Traceback (most recent call last):
      File "/root/.ansible/tmp/ansible-1394199347.29-33439012674717/inventory", line 11, in <module>
        import MySQLdb
    ImportError: No module named MySQLdb

Python version: 2.6.6, MySQL-python version: 1.2.3.

Python code:

    #!/usr/bin/python
    import datetime
    import sys
    import json
    import os
    import shlex
    import MySQLdb

    db = MySQLdb.connect("localhost", "user", "pwd", "db_name")
    cursor = db.cursor()
    cursor.execute("SELECT * FROM hosts")
    data = cursor.fetchone()
    print data
    db.close()

I have written a playbook to run the ansible module, inventory.yaml:

    ---
    - hosts: webservers
      user: root
      sudo: True
      vars:
        act: list
      tasks:
      - name: Run module inventory
        action: inventory act="{{act}}" prod="roop"

I'm running this playbook with:

    ansible-playbook -v playbook/path/inventory.yaml

The same code works on the Python command line but not in the ansible module, even though other code in my ansible module works. Is there any configuration setting I need for ansible?

### CompsciOverflow

#### Variant of the stable roommates problem

The Stable Roommates Problem matches 2n participants into n sets of roommates based on each participant's list of preferences. I was wondering if there was a variant of this problem where the number of roommates is different -- for example, matching 10n participants into n sets. Thanks for the help.

Edit: The Hospitals/Residents problem is also similar to this. Each hospital can take a certain number of residents. The difference is that the residents list their preferences by hospital instead of by other residents.

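
To make the capacitated variant from the edit concrete, here is a sketch of resident-proposing deferred acceptance with hospital capacities (the names and data are illustrative; complete preference lists are assumed):

```python
def hospitals_residents(res_prefs, hosp_prefs, capacity):
    """Deferred acceptance: residents propose in preference order;
    each hospital provisionally keeps its `capacity` best proposers."""
    # rank[h][r] = position of resident r in hospital h's preference list
    rank = {h: {r: i for i, r in enumerate(prefs)} for h, prefs in hosp_prefs.items()}
    matched = {h: [] for h in hosp_prefs}    # hospital -> current residents
    next_choice = {r: 0 for r in res_prefs}  # index of r's next hospital to try
    free = list(res_prefs)                   # residents still to place
    while free:
        r = free.pop()
        if next_choice[r] >= len(res_prefs[r]):
            continue                         # r has exhausted their list
        h = res_prefs[r][next_choice[r]]
        next_choice[r] += 1
        matched[h].append(r)
        matched[h].sort(key=lambda x: rank[h][x])   # best-ranked first
        if len(matched[h]) > capacity[h]:
            free.append(matched[h].pop())    # reject the worst-ranked resident
    return matched

res_prefs = {"r1": ["h1", "h2"], "r2": ["h1", "h2"], "r3": ["h1", "h2"]}
hosp_prefs = {"h1": ["r1", "r2", "r3"], "h2": ["r1", "r2", "r3"]}
capacity = {"h1": 2, "h2": 1}
print(hospitals_residents(res_prefs, hosp_prefs, capacity))
# h1 -> [r1, r2], h2 -> [r3]
```

The roommates variant asked about (sets of size 10 rather than 2) is harder: unlike this bipartite case, stable matchings need not exist at all.
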
### Portland Pattern Repository

#### Wiki Is Nota Dictionary

(by 99-98-229-88.lightspeed.mssnks.sbcglobal.net 41 hours ago)

#### Good Thinking Music

(by 172-5-214-245.lightspeed.livnmi.sbcglobal.net 41 hours ago)

### Planet Clojure

#### Leiningen Templates with Arguments

This template is so wrong! Project templates can be as excellent as they can be awful, since they are very opinionated beings: "a web project MUST be Compojure based!" "a network project MUST be Netty based!" "there is no way I am building a web project based on Compojure!" "a network project? of course ZeroMQ!" […]

### CompsciOverflow

#### What is the name of two mutually idempotent functions?

To clarify: in Haskell, there is an ord function that gives the byte integer of a character (i.e. ord 'a' yields 97), and there is a chr function that takes the byte integer of a character and returns the character (i.e. chr 97 yields 'a'). What is the name of a collection of such functions? I'm not very mathematically literate (I'm working on it), but I found semirings on Wikipedia. Do semirings provide an appropriate description?

### /r/compsci

#### You know books like "The Elegant Universe" which explain really complicated physics problems in simple terms? Is there something like that for comp sci?

Pretty much the title. I'm just starting to learn computer science and, before I dive into the REALLY technical stuff, I was wondering if people could suggest a broad overview in layman's terms.

Edit: Wow, way more than I thought there were! I'm surprised I had problems finding them. Going on Amazon now and ordering a couple of these!

submitted by OceansOnPluto

### StackOverflow

#### Ansible SSH forwarding doesn't seem to work with Vagrant

OK, strange question. I have SSH forwarding working with Vagrant, but I'm trying to get it working when using Ansible as a Vagrant provisioner.

I found out exactly what Ansible is executing and tried it myself from the command line; sure enough, it fails there too.

    [/common/picsolve-ansible/u12.04%]ssh -o HostName=127.0.0.1 \
        -o User=vagrant -o Port=2222 -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -o PasswordAuthentication=no \
        -o IdentityFile=/Users/bryanhunt/.vagrant.d/insecure_private_key \
        -o IdentitiesOnly=yes -o LogLevel=FATAL \
        -o ForwardAgent=yes "/bin/sh \
        -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
    Permission denied (publickey,password).

But when I just run vagrant ssh, the agent forwarding works correctly, and I can check out my Git project read/write.

    [/common/picsolve-ansible/u12.04%]vagrant ssh
    vagrant@vagrant-ubuntu-precise-64:~$ /bin/sh \
        -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker'
    Cloning into '/home/vagrant/poc_docker'...
    remote: Counting objects: 18, done.
    remote: Compressing objects: 100% (14/14), done.
    remote: Total 18 (delta 4), reused 0 (delta 0)
    Receiving objects: 100% (18/18), done.
    Resolving deltas: 100% (4/4), done.
    vagrant@vagrant-ubuntu-precise-64:~$

Has anyone got any idea why this works while the Ansible invocation does not?

Update: By means of ps awux I determined the exact command being executed by Vagrant. I replicated it and git checkout worked.

    ssh vagrant@127.0.0.1 -p 2222 \
        -o Compression=yes \
        -o StrictHostKeyChecking=no \
        -o LogLevel=FATAL \
        -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /Users/bryanhunt/.vagrant.d/insecure_private_key \
        -o ForwardAgent=yes \
        -o LogLevel=DEBUG \
        "/bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "

### TheoryOverflow

#### Does randomness buy us anything inside P?

Let $\mathsf{BPTIME}(f(n))$ be the class of decision problems having a bounded two-sided-error randomized algorithm running in time $O(f(n))$.

Do we know of any problem $Q \in \mathsf{P}$ such that $Q \in \mathsf{BPTIME}(n^k)$ but $Q \not\in \mathsf{DTIME}(n^k)$? Is its non-existence proven?

This question was asked on cs.SE here, but did not get a satisfactory answer.

#### Ample sets for partial order reduction?

I am learning about model checking, and I am having some trouble conceptualizing what ample sets are for partial order reduction. I don't fully understand why they need to satisfy their four conditions, or what those conditions mean. Also, what is the difference between an ample set for a state and an enabled set for the same state in a transition system?

Could someone give me a simple example of a transition system, and then define the ample sets for each state in that example state space? Any insight would be much appreciated. Thanks!

### StackOverflow

#### What are the Clojure time and date libraries?

I couldn't find libraries dealing with time and date in http://clojure.org/libraries. Are there any, or is this something I have to figure out how to do directly with Java?

#### Play Framework: Specs2, each test running its own FakeApplication

I am using Play Framework 2.2.1 and SBT 0.13, and I am facing problems while running multiple Specs2 tests using the play test command. Multiple examples in the Specs2 tests start a FakeApplication, as in:

    running(FakeApplication(withGlobal = globalStub, additionalConfiguration = Map(...))) {
      ....
    }

All the unit tests work fine, but eventually it reaches the IntegrationSpec, which starts a TestServer as follows:

    running(TestServer(10011, application = FakeApplication(additionalConfiguration = Map(...)))) {
      .....
    }

The TestServer is not started with the additionalConfiguration passed to the FakeApplication. If I run only the IntegrationSpec, then whatever is passed as additionalConfiguration to the TestServer is picked up and it runs fine. Why does running it along with other unit tests that also start a FakeApplication fail?

I tried adding the following to Build.scala but that did not help either:

    val main = play.Project(appName, appVersion, appDependencies).settings(
      javaOptions in Test += "-Dconfig.file=conf/application.test.conf",
      ....)

### Lobsters

#### Wadler's Blog: Propositions as Types

### StackOverflow

#### Dynamic loading of fat jars

Using something like one-jar or sbt-assembly, what is the correct way to dynamically load a class from a fat jar? Single-jar example:

    val loader = new URLClassLoader(Array(new File(jarName).toURI.toURL), this.getClass().getClassLoader())
    var classToLoad = Class.forName(pluginName, true, loader)
    var method = classToLoad.getDeclaredMethod(methodName)
    var instance = classToLoad.newInstance()
    var result = method.invoke(instance)
    Console.println("Result: " + result)

This works fine for my package-made jar, but if I create it via one-jar or assembly, I get a java.lang.ClassNotFoundException. Do I need a custom class loader (and if so, where is it?), or is there a special syntax needed on the class or package name? Thanks!

-Greg

(Example in Scala, but more than happy with a Java example!)

#### Ansible playbook shell output

I would like to quickly monitor some hosts using commands like ps, dstat etc. via ansible-playbook. The ansible command itself does exactly what I want; for instance I'd use:

    ansible -m shell -a "ps -eo pcpu,user,args | sort -r -k1 | head -n5"

and it nicely prints all the std output for every host, like this:

    localhost | success | rc=0 >>
     0.0 root /sbin/init
     0.0 root [kthreadd]
     0.0 root [ksoftirqd/0]
     0.0 root [migration/0]

    otherhost | success | rc=0 >>
     0.0 root /sbin/init
     0.0 root [kthreadd]
     0.0 root [ksoftirqd/0]
     0.0 root [migration/0]

However, this requires me to keep a bunch of shell scripts around for every task, which is not very 'ansible', so I put this in a playbook:

    ---
    - hosts: all
      gather_facts: no
      tasks:
      - shell: ps -eo pcpu,user,args | sort -r -k1 | head -n5

and run it with -vv, but the output basically shows the dictionary content, and newlines are not printed as such, so this results in an unreadable mess like this:

    changed: [localhost] => {"changed": true, "cmd": "ps -eo pcpu,user,args | sort -r -k1 | head -n5 ",
    "delta": "0:00:00.015337", "end": "2013-12-13 10:57:25.680708", "rc": 0,
    "start": "2013-12-13 10:57:25.665371", "stderr": "",
    "stdout": "47.3 xxx Xvnc4 :24 -desktop xxx:24 (xxx) -auth /home/xxx/.Xauthority -geometry 1920x1200\n ....

I also tried adding register: var and then a 'debug' task to show {{ var.stdout }}, but the result is of course the same.

Is there a way to get nicely formatted output from a command's stdout/stderr when run via a playbook? I can think of a number of possible ways (format output using sed? redirect output to a file on the host, then fetch that file back and echo it to the screen?), but with my limited knowledge of the shell/ansible it would take me a day just to try them out.

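
One observation worth keeping in mind (a plain Python illustration, not an Ansible feature): the registered result does contain real newlines; they are merely shown escaped when the whole result dictionary is dumped. With a made-up, shortened result string:

```python
import json

# A shortened, made-up version of the dictionary ansible dumps with -vv:
raw = '{"rc": 0, "stdout": "47.3 xxx Xvnc4\\n 0.0 root /sbin/init\\n 0.0 root [kthreadd]"}'

result = json.loads(raw)
print(result["stdout"])  # the \n sequences become real line breaks when printed
```

So the problem is purely one of display: any mechanism that prints the stdout field directly, rather than repr-ing the whole dictionary, will show readable lines.
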
### Planet Clojure

#### Recommended Reading: Light Table and Clojure/ClojureScript

Today, spending time looking at Light Table, a nice IDE that can be used with Clojure. You can download it here. Chris Granger has shown off Light Table on a number of occasions, including at Clojure/conj 2013. An interesting post recently highlighted using Kibit in conjunction with Light Table, and finally there is a Hello World example. Go to more posts on clojure or that are clojure-related at http://digitalcld.com/cld/category/clojure.

### DragonFly BSD Digest

#### Backing up Hammer to non-Hammer volumes

Hammer's ability to stream to remote disks is great, but what if you have storage that uses some other file system? Antonio Huete Jimenez put together a shell script that will dump out the contents of a Hammer PFS, for upload to whatever you like. Read the README for the details.

### StackOverflow

#### SBT hangs resolving dependencies

We have an SBT multi-project build with 17 projects: 1 leaf project, 15 modules that depend on the leaf (but not each other), and 1 aggregator that depends on the 15 modules. All of these projects list exactly the same set of external dependencies (libraryDependencies). For some reason, when we run the update command in the aggregator, it takes on the order of a minute per project (~15 minutes total!), even though there is not a single new dependency to resolve or download.

Worse yet, we recently added one more dependency, and now the update command causes SBT to swell up to ~5GB of memory and sometimes hang completely during resolution.

How do we debug this? We tried profiling it with YourKit and, though it may be a red herring, so far the only thing we see is an sbt.MultiLogger class spending a ton of time in a BufferedOutputStream.flush call.

### DragonFly BSD Digest

#### Note for docbook and upgrading

If you are upgrading packages on your DragonFly 3.6 system and you have docbook installed, there's an extra step needed because several docbook packages have been moved around. If you don't have docbook installed -- nothing to see here.

### CompsciOverflow

#### How to prove the maintenance of the loop invariant?

From CLRS Problem 2-3(c), the algorithm for Horner's rule:

    y = 0
    for i = n downto 0
        y = A[i] + x*y

I have the following loop invariant: $y = \sum\limits_{k=0}^{n-(i+1)} A_{k+i+1} \cdot x^{k}$.

Now, to prove it holds for the next iteration, I have to prove $A_i + x \cdot \sum\limits_{k=0}^{n-(i+1)} A_{k+i+1} \cdot x^{k} = \sum\limits_{k=0}^{n-i} A_{k+i} \cdot x^{k}$. How do I simplify the R.H.S. to get the L.H.S.? Any hint?

#### How to find a recurrence relation in terms of T(n) for a random binary search tree [on hold]

I wanted to know how to form the recurrence relation T(n) for a random BST: what terms appear in the relation, and what approach to use to find it. In the average case this recurrence accounts for the $O(\log n)$ search time of a BST. Thanks

### Lobsters

#### Myths about /dev/urandom

### Lambda the Ultimate Forum

#### The Evolution of CS Papers

A blog post by Crista Lopes discussing the evolution of CS papers away from positions into the realm of science; excerpt:

Note this (Dijkstra's GOTO considered harmful) was not a letter to the editor, it was a full-blown technical, peer-reviewed paper published in a mathematical journal. Seen in the light of current-day peer reviewing practices in CS, even considering the fact that the paper was proposing a clear solution for a clear problem, this paper would likely be shot down. First, the problem for which the solution was proposed was clear but not well motivated: how important was it for programs [of the time] to call routines recursively? -- maybe it wasn't important at all! If it was important, what were other people doing about it?

Any paper with only one reference in the bibliography goes immediately to the reject pile. Second, given that the concept had already been proposed (in writing) by H. Waldburger, the author of this paper would at least be requested to express his ideas in terms of how they differed from those described in H. Waldburger's paper, rather than spending most of the paper describing how to do procedure calls using stacks. Finally, the work is borderline an "engineering issue" rather than a scientific one. These are complaints often seen in today's CS peer reviews.

### /r/scala

#### I just made a new subreddit for quick and simple projects! (x-post r/learnprogramming)

/r/ProgrammingPrompts is a place for people to post quick and easy programming projects for others to use to hone their skills. I hope it's a help!

submitted by F1dd

### TheoryOverflow

#### Equivalence of categories of directed complete posets

I asked this question on math.stackexchange: http://math.stackexchange.com/questions/700975/equivalence-of-categories-of-directed-complete-posets. Since I had no answer, I try here.

In the book "Domains and Lambda-Calculi" by Amadio and Curien, there is the following exercise: Define an equivalence between the category of partial morphisms generated by $(\mathcal{M}_S, \textbf{Dcpo})$ and the category $\textbf{sCpo}$.

The category $\textbf{Dcpo}$ has as objects directed complete posets, i.e. partially ordered sets such that any directed subset has a least upper bound, and as morphisms continuous functions for the Scott topology (Scott opens of a dcpo $D$ are subsets $O$ of $D$ such that (1) $x \in O \textit{ and } x \leq y \Rightarrow y \in O$ and (2) $\Delta$ directed and $\bigvee \Delta \in O \Rightarrow O \cap \Delta \not= \emptyset$). The category $\textbf{sCpo}$ is a subcategory of $\textbf{Dcpo}$: objects are dcpo's with a least element $\bot$, and morphisms are continuous functions $f$ such that $f(\bot) = \bot$.

For any dcpo $C$, we denote by $C_\bot$ the object of $\textbf{sCpo}$ obtained from $C$ by adding a new element $\bot$, which is the least element of $C_\bot$.

The "admissible family of monos" $\mathcal{M}_S$ associates with every object $A$ of $\textbf{Dcpo}$ the class of monomorphisms $\mathcal{M}_S(A)$ such that if $m \in \mathcal{M}_S(A)$, then (1) $m$ is a monomorphism $D \rightarrow A$ for some $D$ and (2) $\textit{im}(m)$ is a Scott open of $A$.

The category of partial morphisms generated by $(\mathcal{M}_S, \textbf{Dcpo})$, denoted below by $\textbf{pC}$, has as objects dcpo's and as morphisms from $D$ to $D'$ equivalence classes $[m, f]$ of representatives of partial morphisms $(m, f)$, where a representative $(m, f)$ for a partial morphism from $A$ to $B$ is a pair of morphisms in $\textbf{Dcpo}$ with $m : D \rightarrow A \in \mathcal{M}_S(A)$ and $f \in \textbf{Dcpo}(D, B)$, and $(m : D \rightarrow A, f : D \rightarrow B)$ is equivalent to $(m' : D' \rightarrow A, f' : D' \rightarrow B)$ iff there is an isomorphism $i : D \rightarrow D'$ such that $m' \circ i = m$ and $f' \circ i = f$.

I had the idea to define the following functor $F : \textbf{pC} \rightarrow \textbf{sCpo}$: for any object $D$, $F(D) = D_\bot$; for any $[m, f] \in \textbf{pC}(D, D')$, $F([m, f])$ is the morphism $g : D_\bot \rightarrow D'_\bot$ defined by: $g(y) = f(x)$ with $m(x) = y$ if $y \in \textit{im}(m)$; $g(y) = \bot$ if $y \notin \textit{im}(m)$.

But it seems that it does not work. Indeed, consider the following dcpo's: $D = (\{ a, b \}, \leq)$ with $a$ and $b$ not comparable; $D' = (\{ a', b' \}, \leq')$ with $a' < b'$. I denote by $m$ the monomorphism $D \rightarrow D'$ defined by $m(a) = a'$ and $m(b) = b'$. Notice that $[m, m] \not= [id_{D'}, id_{D'}]$, since all the morphisms from $D'$ to $D$ are constant. But $F([m, m]) = id_{D'_\bot} = F([id_{D'}, id_{D'}])$, hence $F$ is not faithful. So I am not able to solve the exercise.

(Assume that $\mathcal{M}_T$ associates with every object $A$ of $\textbf{Dcpo}$ the class of monomorphisms $\mathcal{M}_T(A)$ such that if $m \in \mathcal{M}_T(A)$, then (1) $m$ is a monomorphism $D \rightarrow A$ for some $D$, (2) $\textit{im}(m)$ is a Scott open of $A$ and (3) $m(x) \leq m(y) \Rightarrow x \leq y$. Consider the category $\textbf{pC'}$ of partial morphisms generated by $(\mathcal{M}_T, \textbf{Dcpo})$. Then it seems that the functor $F' : \textbf{pC'} \rightarrow \textbf{sCpo}$, defined as $F$ was (for any object $D$, $F'(D) = D_\bot$; for any $[m, f] \in \textbf{pC'}(D, D')$, $F'([m, f])$ is the morphism $g : D_\bot \rightarrow D'_\bot$ defined by: $g(y) = f(x)$ with $m(x) = y$ if $y \in \textit{im}(m)$; $g(y) = \bot$ if $y \notin \textit{im}(m)$), is an equivalence of categories.)

### UnixOverflow

#### FreeNAS 9.2: install FreeBSD packages

I have installed FreeNAS 9.2 amd64 (based on the same FreeBSD version) on a VirtualBox VM. I created users and a pool/volume for my data. In anticipation of installing the apache/mysql-server/php5/php-myadmin FreeBSD packages, I read the relevant docs from the freenas.org site, and so I created a pluginjail to install packages within. I understand FreeNAS packages are managed by pkgng, which works almost the same as pkg_add/pkg_info/pkg_delete etc. Then I launched the following command (don't mind the package version) from that jail's shell:

    $ pkg install mysql-server



And I get the following output:

    Updating repository catalogue
    pkg: No digest falling back on legacy catalog format


If I go to PACKAGESITE, I can find both digests.txz and repo.txz files.

Does anyone have an idea?

### Lobsters

#### Newsweek Writer: Standing by Bitcoin Founder Story

Here is another interview with her. She seems to have bitten off more than she can chew.

### QuantOverflow

#### Are public historical time series available for ratings of sovereign debt?

The nice list of free online data sources, Data sources online, does not mention any data from ratings agencies.

Are historical time series available for sovereign credit ratings (other than as proprietary datasets)?

I realise that the ratings agencies probably would claim their ratings of sovereign debt as proprietary data, yet due to the way this information is disseminated and used it appears to be in the public interest for it to be available freely, and it also appears possible to recover some parts of the time series from the public domain. However, I cannot locate any free sources.

The Guardian made available a snapshot in July 2010, but I would like to analyse how the ratings have changed over time. I would be happy with a subset where nontrivial changes have taken place: for instance, a series for just Greece and Japan would be of interest. I can of course deduce some of the few unchanging series myself.

### StackOverflow

#### javacTask: source release 1.7 requires target release 1.7

I have set up an Android project through sbt (0.13.1) in IDEA 13.0.2. It is mixed Java 7 and Scala 2.10.3, and it uses the SBT support in IDEA.

Even though in my build.sbt I have the following:

 scalacOptions += "-target:jvm-1.7"

javacOptions ++= Seq("-source", "1.7", "-target", "1.7")


here is the result when I make the project with IDEA:

 java: javacTask: source release 1.7 requires target release 1.7


## March 07, 2014

### CompsciOverflow

#### Generalizing the linear subset scan algorithm to a wider class of objective functions, maybe by finding a paper

Given a list of pairs $(a_1,b_1),\ldots,(a_n,b_n)$, where all $a_i \geq 0$ and all $b_i > 0$, my general problem is to determine when we can use a linear subset scan (described below) to solve the optimization problem of finding the optimal combination of pairs,

$$I^* = \operatorname{argmax}_I \, F\Big(\sum_{i \in I} a_i\Big) \Big/ G\Big(\sum_{i \in I} b_i\Big)$$

where $F,G$ are given increasing positive functions for positive inputs.

I have found a class of functions where there is a fast solution, namely $F(x) = x + A$ and $G(x) = (x + B)^\beta$ where $A,B \geq 0$ and $0 \leq \beta \leq 1$. In this case, the optimal solution can be found by sorting all pairs $(a_i,b_i)$ in decreasing order according to $a_i/b_i$, and then trying the first $k$ pairs in sorted order for all $k$ and choosing the best solution, and this gives the optimal solution. (This is an example of linear subset scan optimization.)

Now I want to know: is there a general class of functions $F,G$, ideally defined by abstract properties, for which this linear subset scan approach works, with the pairs sorted either according to $a_i/b_i$ or perhaps according to some $H(a_i,b_i)$ where $H$ depends on $F,G$? This could be a question where someone has a new insight and proof, or simply where someone has seen something like this in the literature. At any rate, I feel that the class of $F,G$ I stated, for which I have a proof, is perhaps not the most general possible, and that I'm missing some key abstract property that makes the linear subset scan work.
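
To make the described procedure concrete, here is a sketch of the linear subset scan for the class the question says is solvable (illustrative code, not taken from any particular paper): sort the pairs by $a_i/b_i$ in decreasing order, evaluate every prefix, and keep the best.

```python
def linear_subset_scan(pairs, F, G):
    """Sort pairs by a/b descending; return the best prefix under F(sum a)/G(sum b)."""
    order = sorted(pairs, key=lambda p: p[0] / p[1], reverse=True)
    best_val, best_subset = float("-inf"), []
    sa = sb = 0
    for k, (a, b) in enumerate(order, 1):
        sa, sb = sa + a, sb + b          # running sums over the prefix
        val = F(sa) / G(sb)
        if val > best_val:
            best_val, best_subset = val, order[:k]
    return best_val, best_subset

# An instance of the stated class: F(x) = x + A, G(x) = (x + B)**beta
F = lambda x: x + 1            # A = 1
G = lambda x: (x + 2) ** 0.5   # B = 2, beta = 1/2
```

For small inputs the result can be checked against brute force over all nonempty subsets, which is a useful sanity check when experimenting with other $F,G$.
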

### StackOverflow

#### Configuring Coffeescript SBT in Build.scala not build.sbt?

Often I come across instructions which tell me how to add an SBT tool to build.sbt, but I actually have a Build.scala, not a build.sbt. So I want to know how to do the same in my Build.scala.

The particular case that is causing me trouble is Coffeescript SBT, which has instructions for how to add it to a build.sbt. However, I don't have a build.sbt; I have a Build.scala, so I don't know what to do.

The code referenced here also helps to solve this problem.

#### Insertion order of a list based on order of another list

I have a sorting problem in Scala that I could certainly solve with brute-force, but I'm hopeful there is a more clever/elegant solution available. Suppose I have a list of strings in no particular order:

val keys = List("john", "jill", "ganesh", "wei", "bruce", "123", "Pantera")


Then I receive the values for these keys in random order (full disclosure: I'm experiencing this problem in an akka actor, so events are not in order):

def receive:Receive = {
case Value(key, otherStuff) => // key is an element in keys ...


And I want to store these results in a List where the Value objects appear in the same order as their key fields in the keys list. For instance, I may have this list after receiving the first two Value messages:

List(Value("ganesh", stuff1), Value("bruce", stuff2))


ganesh appears before bruce merely because he appears earlier in the keys list. Once the third message is received, I should insert it into this list in the correct location per the ordering established by keys. For instance, on receiving wei I should insert him into the middle:

List(Value("ganesh", stuff1), Value("wei", stuff3), Value("bruce", stuff2))


At any point during this process, my list may be incomplete but in the expected order. Since the keys are redundant with my Value data, I throw them away once the list of values is complete.

Show me what you've got!
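
A language-agnostic sketch of one approach (in Python for brevity; a Value is modeled as a (key, stuff) tuple): precompute each key's rank once, then binary-insert each arriving value by its rank, so the partial list is always in keys order.

```python
import bisect

class OrderedCollector:
    """Accumulate (key, stuff) values, keeping them in the order of `keys`."""
    def __init__(self, keys):
        self._rank = {k: i for i, k in enumerate(keys)}
        self._ranks = []   # sorted ranks of the values received so far
        self.values = []   # received values, always in keys order

    def receive(self, key, stuff):
        r = self._rank[key]
        i = bisect.bisect_left(self._ranks, r)  # insertion point by rank
        self._ranks.insert(i, r)
        self.values.insert(i, (key, stuff))

keys = ["john", "jill", "ganesh", "wei", "bruce", "123", "Pantera"]
c = OrderedCollector(keys)
c.receive("ganesh", "stuff1")
c.receive("bruce", "stuff2")
c.receive("wei", "stuff3")
print(c.values)  # [('ganesh', 'stuff1'), ('wei', 'stuff3'), ('bruce', 'stuff2')]
```

The same idea carries over to Scala: build a Map from key to index once, and insert each Value at the position given by a binary search over the stored indices (or simply sort once at the end by the key's index, if nothing reads the list before it is complete).
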

### Lobsters

#### Transitioning Mozilla Persona to Community Ownership

The Persona After-Action Report is also interesting.

### CompsciOverflow

#### Algorithm to Group Vertices of Graph

The following graph is logically divided into layers (computed with Dijkstra's shortest-paths algorithm):

 Vertices   Layer

Root      0
/   \
A     B     1
/ \    |
C   D   E     2
\  |  /
\ | /
F         3


Now I'm looking for an algorithm which groups vertices when they have a (single) common ancestor in the previous layer, e.g. for the graph in the example the groups would be:

0: A, B
1: C, D
2: E
3: F


I know that this is doable by visiting vertices and comparing ancestors but I was wondering whether there is a well known algorithm for it.

Update: My question is really only about finding the groups. I'm aware of the fact that I can traverse the vertices, test for incoming edges, and group those vertices. Furthermore, the graph is fully constructed.

One (now deleted) answer mentioned DFS, which creates a search forest (as BFS creates a search tree, which I basically used for the levels, though I mentioned Dijkstra). So I assume that combining BFS and DFS could give me the desired result.
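
The combination suggested above can be sketched directly: one BFS assigns layers and records each vertex's previous-layer ancestors, then vertices are grouped by (layer, ancestor set). The adjacency-list encoding of the example graph below is illustrative.

```python
from collections import defaultdict, deque

def group_by_parents(adj, root):
    """BFS from root; group each vertex with the others in its layer
    that share exactly the same set of previous-layer ancestors."""
    level = {root: 0}
    parents = defaultdict(set)
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in level:            # first time we reach v: fix its layer
                level[v] = level[u] + 1
                queue.append(v)
            if level[v] == level[u] + 1:  # u is a previous-layer ancestor of v
                parents[v].add(u)
    buckets = defaultdict(list)
    for v, ps in parents.items():
        buckets[(level[v], frozenset(ps))].append(v)
    key = lambda item: (item[0][0], sorted(item[0][1]))
    return [sorted(vs) for _, vs in sorted(buckets.items(), key=key)]

adj = {"Root": ["A", "B"], "A": ["C", "D"], "B": ["E"],
       "C": ["F"], "D": ["F"], "E": ["F"]}
print(group_by_parents(adj, "Root"))
# [['A', 'B'], ['C', 'D'], ['E'], ['F']]
```

Note this groups by the full set of previous-layer ancestors, which reproduces the listed groups for the example; if "a (single) common ancestor" means something weaker, the bucket key would need adjusting.
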

### StackOverflow

#### Play/Scala - How display returned web request as html

I make a GET request to Google, and Google returns a bunch of HTML for a login page. When I try to display that response, it outputs the HTML as plain text rather than as a nice web page. How do I display the returned page?

Here's my function. I have /login set to route to the login function.

def login() = Action {

val duration = Duration(10, SECONDS)
val response = Await.result(future, duration)
Logger.info("Response: " + response.toString)
Logger.info("Status code: " + response.status.toString())

Ok(response.body)
}


### Planet FreeBSD

#### PC-BSD Weekly Feature Digest 20

New Sound Management

Work has begun to fully port PulseAudio into PC-BSD for 10.1, and we are quite pleased so far with the results. Kris has been making headway this week getting PulseAudio and its related utilities working. In the meantime Ken has been working on an all-new utility, pc-mixer. pc-mixer is a complete front-end to the FreeBSD "mixer" utility that gives users a simple-to-use GUI and volume control for everyday tasks. There will also be an advanced tab allowing for more specific audio setups and control.

Other News

* New PBIs for 9.x versions and 10.x versions were released this week, so be sure to check out the AppCafe and see what's new.

* Gnome 3 and Cinnamon 2.0 desktops have received updates this week. These desktops are not 100% fully supported yet, and as such we can not make any guarantee on functionality.

* Grub 2.02 has been fully ported over and updated to GRUB 2.02-prerelease.

* Lastly, the PC-BSD ports tree has been frozen in preparation for our quarterly package update.

Improvements for Life-Preserver
* Add new "Classic" backup dialog for custom exclusions and status updates
* Fix bug with restoring a file/dir into a missing directory on the main system.
* Clean up the restore tab

Bug Fixes
* Bugfixes to the FUSE "pbifs" file-system
* Fix bug showing HPLIP drivers in the main CUPS Manager.
* Fix seg-fault crash in EasyPBI when removing a non-selected item.

### StackOverflow

#### Systemtap for java on ubuntu

I'd like not only to trace the java process, but also to use the new support for OpenJDK tracing in systemtap: both the hotspot tracing and the method tracing.

Accordingly, I installed the ddebs.ubuntu.com repository to install the kernel debugging symbols -- I can then call a stap script that uses kernel tapsets, but not yet the java ones. I did notice a package called openjdk-7-jdk-dbgsym and tried to install it to see if it had the systemtap tapsets for OpenJDK, but this conflicts with the openjdk-7-dbg package (and Ubuntu then won't let me file a bug report, since the openjdk-7-jdk-dbgsym package is not from the 'official' servers). And if I uninstall that one and install the other, it doesn't help anyway.

Has anyone successfully did this on ubuntu?

edit: in order to build systemtap from source successfully on ubuntu with java byteman support, you have to pass:

--with-java=/usr/lib/jvm/default-java


Otherwise the build will not produce the jars and other pieces that are needed. Then you have to run make install and follow the steps in the java/README file in the source dir (and don't forget to modify the path).

There is also another option, --with-dyninst, which I haven't tried but which might 'fix' it for the other invocation modes.

edit2: well, it compiles and even runs, but it never outputs anything, even on the given examples and with BYTEMAN_HOME set...

### TheoryOverflow

#### Chordal graph and its clique tree

A graph $G$ is chordal if it is the intersection graph of subtrees of a tree $T$. In particular $T$ can be chosen such that each node of $T$ corresponds to a maximal clique of $G$ and the subtrees $T_v$ consist of precisely those maximal cliques in $G$ that contain $v$. $T$ is then called the clique tree of $G$.

Now my question is the following.

Can every tree be represented as the clique tree of some chordal graph?

Any counterexample or proof hint is welcome.

### CompsciOverflow

#### Does there exist a data compression algorithm that uses a large dataset distributed with the encoder/decoder?

If my goal were to compress say 10,000 images and I could include a dictionary or some sort of common database that the compressed data for each image would reference, could I use a large dictionary shared by the entire catalog and therefore get much smaller file sizes? Could this be expanded to work with images in general, i.e. to replace something like JPEG?

Are there existing compression systems that operate like this, where there is a large common set of bits transmitted and loaded before decompression, that has been built by analyzing many images?

For example, is there an existing computer science/machine learning research effort using sparse autoencoding over a large set of images and this concept of distributing a network derived from that encoding with the decompressor?

Note: I have deleted quite a bit of context from this question because it was claimed that this made the question too "opinionated". I also had to edit the title after someone else modified the title in such a way as to change the meaning of the question. If you are looking for clarification about my question, please ask.

submitted by _rs
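
One small-scale existing mechanism along these lines is DEFLATE's preset dictionary: sender and receiver share a dictionary out of band, and compressed streams may back-reference it without transmitting it. A Python sketch using zlib's zdict support (the dictionary and data here are toy examples):

```python
import zlib

# In the question's scenario this would be the large shared database,
# distributed once with the encoder/decoder rather than with each file.
SHARED_DICT = b"the quick brown fox jumps over the lazy dog"

def compress(data: bytes) -> bytes:
    c = zlib.compressobj(level=9, zdict=SHARED_DICT)
    return c.compress(data) + c.flush()

def decompress(blob: bytes) -> bytes:
    # The decoder must hold the exact same dictionary.
    d = zlib.decompressobj(zdict=SHARED_DICT)
    return d.decompress(blob) + d.flush()

data = b"the quick brown fox jumps over the lazy dog"
small = compress(data)
assert decompress(small) == data
print(len(small), len(zlib.compress(data, 9)))  # dictionary stream is shorter
```

DEFLATE dictionaries are tiny (up to 32 KB) compared with what the question envisions, but they illustrate the trade-off: the more the shared data resembles the inputs, the shorter the per-file streams, at the cost of shipping and versioning the dictionary itself.
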

### TheoryOverflow

#### List of theorems stating that P does not equal NP if and only if

I think it would be a good idea to make a list of theorems stating that P does not equal NP if and only if such-and-such exists, some complexity class is contained in another complexity class, and so on and so forth.

### StackOverflow

Recently we started using the Play Framework and are seeing some unusual CPU load.

Machine details and other configurations:

32G Machine
12  Cores
PlayFramework 2.2.0
java -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=128m
Java applications are running within a Docker container (Docker version 0.8.0).


There are 6 Play servers running behind nginx:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31752 root      20   0 7876m 1.2g  14m S  716  3.8 150:55.28 java
26282 root      20   0 7862m 1.2g  14m S   48  3.8 310:51.65 java
56449 root      20   0 7789m 389m  13m S    2  1.2   0:33.10 java
40006 root      20   0 7863m 1.2g  14m S    2  3.8  17:56.41 java
42896 root      20   0 7830m 1.2g  14m S    1  3.8  15:10.30 java
52119 root      20   0 7792m 1.2g  14m S    1  3.7   8:48.38 java


The request rate is at most 100 req/s.

Has anyone faced similar issues before? Please let me know.

#### Broken pipe exceptions from Redis client Jedis

We make Redis client calls from a Play Framework application. These Redis calls are made from an actor using the Akka scheduler, which runs every 60 seconds and makes Redis calls along with other JDBC calls. After the scheduler has run for a few minutes we start seeing the following in the log files, and the app stops responding to any Redis client calls. This is my first encounter with Redis, so any pointers or help are appreciated. Our configuration:

redis.host = localhost

redis.port = 6379

redis.timeout = 10

redis.pool.maxActive =110

redis.pool.maxIdle = 50

redis.pool.maxWait = 3000

redis.pool.testOnBorrow = true

redis.pool.testOnReturn = true

redis.pool.testWhileIdle = true

redis.pool.timeBetweenEvictionRunsMillis = 60000

redis.pool.numTestsPerEvictionRun = 10

Exception details:

redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Broken pipe
at redis.clients.jedis.Connection.flush(Connection.java:69) ~[redis.clients.jedis-2.3.0.jar:na]
at redis.clients.jedis.JedisPubSub.subscribe(JedisPubSub.java:58) ~[redis.clients.jedis-2.3.0.jar:na]
............
at akka.actor.ActorCell.invoke(ActorCell.scala:456) [com.typesafe.akka.akka-actor_2.10-2.2.0.jar:2.2.0]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237) [com.typesafe.akka.akka-actor_2.10-2.2.0.jar:2.2.0]
at akka.dispatch.Mailbox.run(Mailbox.scala:219) [com.typesafe.akka.akka-actor_2.10-2.2.0.jar:2.2.0]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) [com.typesafe.akka.akka-actor_2.10-2.2.0.jar:2.2.0]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [org.scala-lang.scala-library-2.10.3.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [org.scala-lang.scala-library-2.10.3.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [org.scala-lang.scala-library-2.10.3.jar:na]
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.7.0_51]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) ~[na:1.7.0_51]
at java.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[na:1.7.0_51]
at redis.clients.util.RedisOutputStream.flushBuffer(RedisOutputStream.java:31) ~[redis.clients.jedis-2.3.0.jar:na]
at redis.clients.util.RedisOutputStream.flush(RedisOutputStream.java:223) ~[redis.clients.jedis-2.3.0.jar:na]
at redis.clients.jedis.Connection.flush(Connection.java:67) ~[redis.clients.jedis-2.3.0.jar:na]
... 15 common frames omitted


#### Adding resource generators from SettingKey of Seq[Reference] and Reference-scoped settings?

I have a SettingKey[Seq[Reference]] and I want to add resource generators for every Reference in a Seq that are dependent on settings scoped to the Reference itself, e.g.

resourceGenerators in Compile <++= theReferencesKey.map { (refs) =>
  refs.map { ref =>
    (name in ref, somethingElse in ref, resourcesManaged).map {
      (name, somethingElse, resourcesDir) => {
        // The resource generation returning Seq[File]
      }
    }
  }
}


I know that this code does not work, because the types are completely wrong. I think what I would need is:

1. A monadic sequence operation: Seq[Task[T]] => Task[Seq[T]]

2. A monadic bind operation (flatMap) that works with the initialize type...?

I can't find either in the Scaladoc.

Where can I find this kind of information?

#### Loop through a Scala ListBuffer in Java

If it is possible, how would I loop through a ListBuffer from within Java? Initialization of the ListBuffer (in Scala):

var newModVersions: ListBuffer[NewModVersionEntry] = new ListBuffer[NewModVersionEntry]()


Current enhanced for loop attempt (in Java):

for (VersionCheckHandler.NewModVersionEntry entry : XplosionCoreBL.newModVersions())


### StackOverflow

#### Scala's "for comprehension" in Javascript

I've gotten used to Scala's "for" construct and like it a lot. I am now writing some Javascript code and am using Lo-Dash (basically an extension of Underscore). Is there any way to mimic Scala's "for" comprehension in javascript?

My goal is to clean map/reduce code similar to this:

var rawCoordinates = _.map( directionsRoute.legs, function ( leg ) {
return _.map( leg.steps, function ( step ) {
return _.map( step.path, function ( latLng ) {
return [ latLng.lng(), latLng.lat() ];
} );
} );
} );
var finalCoordinates = _.flatten( rawCoordinates, true );


In the code, I'm producing an array of coordinates in the format [[coord1,coord2,coord3],[coord4,coord5]], where each coord is [39.5, -106.2] (it is an array of the coordinates of each Google Maps directions step).

In Scala, this same thing could be written like this (correct me if I'm wrong):

val stepCoordinates:List[List[Tuple2[Number,Number]]] = for {
leg <- legs;
step <- leg.steps;
path <- step.path;
latLng <- path
} yield (latLng.lng(), latLng.lat())
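
As an aside, the desugaring a for comprehension performs (a chain of flatMap calls with a final map/yield) can be sketched with plain JDK streams; the nested lists below are made-up stand-ins for legs, steps, and [lng, lat] pairs:

```java
import java.util.List;
import java.util.stream.Collectors;

public class Comprehension {
    public static void main(String[] args) {
        // legs -> steps -> coordinate pairs, like the nested generators above
        List<List<List<double[]>>> legs = List.of(
            List.of(List.of(new double[]{-106.2, 39.5}, new double[]{-106.3, 39.6})),
            List.of(List.of(new double[]{-105.9, 39.7})));

        List<double[]> coords = legs.stream()
            .flatMap(List::stream)          // leg <- legs : stream of steps
            .flatMap(List::stream)          // step <- leg.steps : stream of [lng, lat]
            .collect(Collectors.toList());  // fully flattened result

        System.out.println(coords.size()); // 3 coordinate pairs in total
    }
}
```

The same flatMap-chaining shape is what Lo-Dash's flatten-after-map is emulating.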


Thanks!

#### IntelliJ IDEA font smoothing in Linux

I'm using IntelliJ IDEA on Arch Linux with KDE. OpenJDK version is 1.7.0_40.

All IDE fonts (including the code editor) are rendered without any antialiasing or font smoothing. I set idea.use.default.antialiasing.in.editor to true in the idea.properties file, and added -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true to the _JAVA_OPTIONS variable, without any effect.

What else can I try to enable font smoothing?

### CompsciOverflow

#### Ample sets for partial order reduction?

I am learning about model checking, and I am having some trouble conceptualizing what ample sets are for partial order reduction. I don't fully understand why they need to satisfy their four conditions, or what those conditions mean. Also, what is the difference between an ample set for a state and the enabled set for the same state in a transition system?

Could someone give me a simple example of a transition system, and then define the ample sets for each state in that example state space? Any insight would be much appreciated.

Thanks!

### TheoryOverflow

#### What relations are there between a problem hardness and the hardness of verifying a witness?

Suppose you are given a Dominating Set instance $\langle G,k\rangle$.

Now suppose I give you a set of vertices $D$ of size $k$. Deciding whether $D$ is a dominating set of $G$ requires time linear in the size of $G$, and doesn't seem to be possible by looking only at the vertices of $D$.

In contrast, if we have a $k$-path instance $\langle G,k\rangle$ (asking whether a simple path of length $k$ exists in $G$), and I give you a tuple $P$ of $k$ vertices, you can verify that $P$ is a $k$-path by reading merely $k$ bits of the adjacency table.
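
To illustrate, a $k$-path verifier only touches the adjacency entries along consecutive pairs of the witness (plus a distinctness check that never looks at the graph at all). A minimal sketch, with a made-up 4-vertex graph:

```java
import java.util.HashSet;
import java.util.Set;

public class KPathWitness {
    // Verify that 'path' is a simple path in the graph given by 'adj',
    // reading only the adjacency entries for consecutive pairs.
    static boolean isKPath(boolean[][] adj, int[] path) {
        Set<Integer> seen = new HashSet<>();
        for (int v : path) {
            if (!seen.add(v)) return false;            // vertices must be distinct
        }
        for (int i = 0; i + 1 < path.length; i++) {
            if (!adj[path[i]][path[i + 1]]) return false; // one adjacency read per edge
        }
        return true;
    }

    public static void main(String[] args) {
        // Example graph: edges 0-1, 1-2, 2-3
        boolean[][] adj = new boolean[4][4];
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}};
        for (int[] e : edges) { adj[e[0]][e[1]] = true; adj[e[1]][e[0]] = true; }

        System.out.println(isKPath(adj, new int[]{0, 1, 2, 3})); // true
        System.out.println(isKPath(adj, new int[]{0, 2, 3}));    // false: 0-2 is not an edge
    }
}
```

Note the running time depends only on $k$, not on $|V|$ or $|E|$, which is what the $EasyVer$ definition below formalizes.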

In general, assume you have some graph problem $L\subseteq \mathcal{G}\times \mathbb{N}$ whose witness is a set of $k$ vertices/edges ($k$ being the parameter of the problem).

Now we can define the set of problems which are "easy" to verify as:

$EasyVer=\{L\subseteq \mathcal{G}\times \mathbb{N} \mid$ a witness $w$ for an instance of $L$ can be verified in $poly(k)$ time$\}$, i.e. in time independent of $|V|,|E|$.

It seems, for example, that $EasyVer \not\subset FPT$ and $FPT \not\subset EasyVer$, as

1. $VC\in FPT$, but $VC \notin EasyVer$.
2. $Clique\in EasyVer$, while Clique is $W[1]$-hard.

• Does my definition even make sense?
• Are there known complexity classes with similar meaning?
• Any other complexity relations to known classes?
• Does it make more sense to define $EasyVer$ in terms of the number of bits a verification algorithm needs to read from the adjacency matrix?
• Does generalizing the definition to $Ver_{f(n,k)}=\{L\subseteq \mathcal{G}\times \mathbb{N} \mid$ a witness $w$ for an instance of $L$ can be verified in time $O(f(|G|,k))\}$ make sense?

### StackOverflow

#### Not possible to source .bashrc with Ansible

I can ssh to the remote host and do a source /home/username/.bashrc - everything works fine. However if I do:

- name: source bashrc
  sudo: no
  command: source /home/username/.bashrc

I get:

failed: [hostname] => {"cmd": ["source", "/home/username/.bashrc"], "failed": true, "rc": 2}
msg: [Errno 2] No such file or directory


I have no idea what I'm doing wrong...
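One likely explanation (an assumption, not confirmed in the question): Ansible's command module does not pass the command through a shell, and source is a shell builtin rather than an executable on disk, hence the "No such file or directory". The usual workaround is the shell module, roughly:

```yaml
# Sketch: run through a shell so the 'source' builtin exists
- name: source bashrc
  sudo: no
  shell: source /home/username/.bashrc
```

Note that anything sourced this way only lives for the duration of that one shell task; it does not affect subsequent tasks.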

#### Universal HealthCare - is it possible?

Recently, the strange case of Dr. Richard Arjun Karl, a minimally invasive spine surgeon who operated in NJ from mid-2004 to 2012, came to my attention.
I've read the official judgement and all related testimonies and findings.

### StackOverflow

#### sonar-maven-plugin throw nullPointer exception

I am new to this plugin. I have a Scala project and I used sonar-maven-plugin version 3.7.4. Here is the relevant part of my pom file:

<plugin>
  <groupId>org.codehaus.sonar</groupId>
  <artifactId>sonar-maven-plugin</artifactId>
  <version>3.7.4</version>
</plugin>

JDK is 1.7. SonarQube version is 3.7.4.

This is the maven command I used "maven sonar:sonar -e". I kept getting this i/o error.

Failed to execute goal org.codehaus.sonar:sonar-maven-plugin:3.7.4:sonar (default-cli) on project Orbit: null: MojoExecutionException: NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.sonar:sonar-maven-plugin:3.7.4:sonar (default-cli) on project Orbit: null
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException
at org.sonar.maven.ExceptionHandling.handle(ExceptionHandling.java:37)
at org.sonar.maven.SonarMojo.execute(SonarMojo.java:175)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at org.sonar.batch.scan.filesystem.DeprecatedFileSystemAdapter.resolvePath(DeprecatedFileSystemAdapter.java:132)
at org.sonar.plugins.scala.surefire.SurefireSensor.analyse(SurefireSensor.java:53)
at org.sonar.batch.phases.SensorsExecutor.execute(SensorsExecutor.java:72)
at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:114)
at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:142)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:187)
at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:182)
at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:175)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.scan.ScanTask.scan(ScanTask.java:57)
at org.sonar.batch.scan.ScanTask.execute(ScanTask.java:45)
at org.sonar.batch.bootstrap.TaskContainer.doAfterStart(TaskContainer.java:82)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrap.BootstrapContainer.executeTask(BootstrapContainer.java:156)
at org.sonar.batch.bootstrap.BootstrapContainer.doAfterStart(BootstrapContainer.java:144)
at org.sonar.api.platform.ComponentContainer.startComponents(ComponentContainer.java:92)
at org.sonar.api.platform.ComponentContainer.execute(ComponentContainer.java:77)
at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:92)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:74)
at org.sonar.runner.batch.IsolatedLauncher.execute(IsolatedLauncher.java:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.sonar.runner.impl.BatchLauncher$1.delegateExecution(BatchLauncher.java:87)
at org.sonar.runner.impl.BatchLauncher$1.run(BatchLauncher.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at org.sonar.runner.impl.BatchLauncher.doExecute(BatchLauncher.java:69)
at org.sonar.runner.impl.BatchLauncher.execute(BatchLauncher.java:50)
at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:102)
at org.sonar.runner.api.Runner.execute(Runner.java:90)
at org.sonar.maven.SonarMojo.execute(SonarMojo.java:173)
... 21 more

When I tried the command "maven sonar:sonar -Dsonar.dynamicAnalysis=false", the problem went away. So it is analysis-related. How do I fix this problem?

Thanks for all the help.

Regards, Carolyn

### StackOverflow

#### How to run project created by leiningen?

I'm running Debian Wheezy, openjdk-7-jre, clojure 1.4.0 and leiningen-1.7.1, all installed from official repo.

So I ran

lein new hello
cd hello
lein run -m hello.core


and saw an error:

Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: hello.core
at clojure.lang.Util.runtimeException(Util.java:165)
at clojure.lang.RT.classForName(RT.java:2017)
at clojure.lang.Reflector.invokeStaticMethod(Reflector.java:206)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:92)
at clojure.lang.Reflector.invokeStaticMethod(Reflector.java:225)
at user$eval35.invoke(NO_SOURCE_FILE:1)
at clojure.lang.Compiler.eval(Compiler.java:6465)
at clojure.lang.Compiler.eval(Compiler.java:6455)
at clojure.lang.Compiler.eval(Compiler.java:6431)
at clojure.core$eval.invoke(core.clj:2795)
at clojure.main$eval_opt.invoke(main.clj:296)
at clojure.main$initialize.invoke(main.clj:315)
at clojure.main$null_opt.invoke(main.clj:348)
at clojure.main$main.doInvoke(main.clj:426)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:405)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:518)
at clojure.main.main(main.java:37)
Caused by: java.lang.ClassNotFoundException: hello.core
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at clojure.lang.RT.classForName(RT.java:2013)
... 21 more


I have never written anything in Java before, so this is very cryptic to me. I tried to add

:main hello.core


to my project.clj file and then just

lein run


but it didn't help.

### StackOverflow

#### Play Framework: Sending request.body to WS using WS.put for any content type (maybe using byteArray?)

I have an API endpoint that takes in a PUT request for a file, authenticates it and sends the authenticated request to a remote web server endpoint. I am using Play's WS object to make the remote put call as follows. I tried using request.get.asBytes and it compiled fine, but the remote server received an empty file.

def putFile(location:String, key:String) = Action.async { request =>
  val MAXCHUNKSIZE = 10000000
  val path = getPath(location, key)
  val stringToSign = getStringToSign(location, key)
  val rawBody = request.body.asRaw
  val bodBytes = rawBody flatMap { x: RawBuffer => x.asBytes(MAXCHUNKSIZE) } getOrElse(Array.empty[Byte])
  println("Request bodybytes: " + bodBytes.isEmpty)

  val result = WS.url(path)
    .put(bodBytes) map { response =>
      println("Response: " + response.body)
      Status(response.status)(response.body).as(response.ahcResponse.getContentType)
    }
  result
}

I have to confess, I have been a Java developer and am very new to Scala and Play (1 month), and the last map function was given to me by a colleague; I am not fully able to comprehend it. Any help would be appreciated (1).

The result of the debugging println messages was that the bodBytes.isEmpty is true for text files which are not empty. Any help would be appreciated(2)

If I hardcode a string in place of bodBytes, it works and seems to send the request, but there is one issue. It tries to print the response.body and it seems to be stuck waiting. Whereas the file is already uploaded to the server. Any help would be appreciated(3)

The test file I am using is just a simple Lorem ipsum file, no bigger than 400 bytes. The reason I am using asRaw is that the body can be an image, an XML file, etc. Any help, preferably with Scala code, is appreciated.

#### play app repository path is not correct

I have a Play app. When I ran sbt, I got an error saying:

[warn]   http://repo.typesafe.com/typesafe/releases/com/typesafe/play/sbt-plugin/2.2.1/sbt-plugin-2.2.1.pom
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn]  Note: Some unresolved dependencies have extra attributes.  Check that these dependencies exist with the requested attributes.
[warn]      com.typesafe.play:sbt-plugin:2.2.1 (sbtVersion=0.13, scalaVersion=2.10)


But actually, the path of this plugin is http://repo.typesafe.com/typesafe/releases/com.typesafe.play/ rather than the failed trial http://repo.typesafe.com/typesafe/releases/com/typesafe/play. In my plugin.sbt, I have this according to the documentation:

resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases"



Could anybody help out?

# The Obligatory Sudoku Example

No discussion about constraint solvers is complete without the obligatory sudoku example. Unfortunately, sudoku is such a basic exercise for a constraint solver that it doesn’t really tell you much about the engine. But if anything, because sudoku is so standard, it has become a good way to get a feel for the style of the constraint DSL.

The Loco (0.2.0) version is concise and readable, with a nice separation between the core set of constraints that underlie all sudoku puzzles and the additional starter numbers provided by the particular puzzle.

In most constraint solvers, the constraint operators imperatively impose constraints on a variable store. In Loco, the constraint operators simply produce Clojure data structures. Since we’re working with Clojure data, assembling the full model is simply a matter of concatenating the sequence of constraints describing the fundamental rules of sudoku with the sequence of constraints specific to a given puzzle. This is a good example of how Loco lets you easily build portions of your model separately and then combine them or otherwise manipulate them with standard Clojure functions.

To follow the example, the main thing you need to know about Loco is that the notion of a subscripted variable, like $grid_{i,j}$, is represented in Loco as a vector, e.g., [:grid i j].

First, here’s how we’re going to ultimately input the Sudoku grids to our solver:

; http://www.mirror.co.uk/news/weird-news/worlds-hardest-sudoku-puzzle-ever-942299
(def worlds-hardest-puzzle
  [[8 - - - - - - - -]
   [- - 3 6 - - - - -]
   [- 7 - - 9 - 2 - -]
   [- 5 - - - 7 - - -]
   [- - - - 4 5 7 - -]
   [- - - 1 - - - 3 -]
   [- - 1 - - - - 6 8]
   [- - 8 5 - - - 1 -]
   [- 9 - - - - 4 - -]])

Here’s the solving code:

(def basic-model
  (concat
    ; range-constraints
    (for [i (range 9) j (range 9)]
      ($in [:grid i j] 1 9)),
    ; row-constraints
    (for [i (range 9)]
      ($distinct (for [j (range 9)] [:grid i j]))),
    ; col-constraints
    (for [j (range 9)]
      ($distinct (for [i (range 9)] [:grid i j]))),
    ; section-constraints
    (for [section1 [[0 1 2] [3 4 5] [6 7 8]]
          section2 [[0 1 2] [3 4 5] [6 7 8]]]
      ($distinct (for [i section1, j section2] [:grid i j])))))

(defn solve-sudoku [grid]
  (solution
    (concat basic-model
            (for [i (range 9), j (range 9)
                  :let [hint (get-in grid [i j])]
                  :when (number? hint)]
              ($= [:grid i j] hint)))))

Let's test it out in the REPL:

=> (solve-sudoku worlds-hardest-puzzle)
{[:grid 4 0] 3, [:grid 5 1] 8, [:grid 6 2] 1, [:grid 7 3] 5, [:grid 8 4] 1, [:grid 3 0] 1, [:grid 4 1] 6, [:grid 5 2] 7, [:grid 6 3] 9, [:grid 7 4] 2, [:grid 8 5] 8, [:grid 2 0] 6, [:grid 3 1] 5, [:grid 4 2] 9, [:grid 5 3] 1, [:grid 6 4] 7, [:grid 7 5] 6, [:grid 8 6] 4, [:grid 1 0] 9, [:grid 2 1] 7, [:grid 3 2] 4, [:grid 4 3] 8, [:grid 5 4] 6, [:grid 6 5] 4, [:grid 7 6] 9, [:grid 8 7] 5, [:grid 0 0] 8, [:grid 1 1] 4, [:grid 2 2] 5, [:grid 3 3] 2, [:grid 4 4] 4, [:grid 5 5] 9, [:grid 6 6] 3, [:grid 7 7] 1, [:grid 8 8] 2, [:grid 0 1] 1, [:grid 1 2] 3, [:grid 2 3] 4, [:grid 3 4] 3, [:grid 4 5] 5, [:grid 5 6] 5, [:grid 6 7] 6, [:grid 7 8] 7, [:grid 0 2] 2, [:grid 1 3] 6, [:grid 2 4] 9, [:grid 3 5] 7, [:grid 4 6] 7, [:grid 5 7] 3, [:grid 6 8] 8, [:grid 0 3] 7, [:grid 1 4] 8, [:grid 2 5] 1, [:grid 3 6] 8, [:grid 4 7] 2, [:grid 5 8] 4, [:grid 0 4] 5, [:grid 1 5] 2, [:grid 2 6] 2, [:grid 3 7] 9, [:grid 4 8] 1, [:grid 0 5] 3, [:grid 1 6] 1, [:grid 2 7] 8, [:grid 3 8] 6, [:grid 0 6] 6, [:grid 1 7] 7, [:grid 2 8] 3, [:grid 0 7] 4, [:grid 1 8] 5, [:grid 0 8] 9, [:grid 8 0] 7, [:grid 7 0] 4, [:grid 8 1] 9, [:grid 6 0] 5, [:grid 7 1] 3, [:grid 8 2] 6, [:grid 5 0] 2, [:grid 6 1] 2, [:grid 7 2] 8, [:grid 8 3] 3}

On my machine, this "hardest Sudoku" takes about 17 ms to solve. Benchmarking it against your favorite constraint solver on your machine, and pretty-printing the output as a readable grid, are left as an exercise for the reader.

### StackOverflow

#### Scala parsers and combinators: java.lang.RuntimeException: string matching regex `\z' expected

I am trying to parse some text following a grammar for Dynamic Epistemic Logic using Scala's RegexParsers, as part of my Master's thesis. But I keep getting the same error on simple logical conjunctions. I understand where and why it's failing, but not why it's matching what it is in the first place.

My code (severely boiled down to isolate the problem):

import scala.util.parsing.combinator._

class Formula() {
  def and(q:Formula) = Conjunction(this, q) // ∧
}

abstract class Literal extends Formula
abstract class Constant extends Formula
case class Atom(symbol:String) extends Literal
case class NotAtom(p:Atom) extends Literal
case class Conjunction(p:Formula, q:Formula) extends Formula

class mapParser extends RegexParsers {
  val conjOp = "&"
  val negOp = "~"
  val listseparator = ","
  val leftparen = "("
  val rightparen = ")"

  def id:Parser[String] = "[a-z_]+".r // fluents are never capitalized, but may have underscore
  def litargs: Parser[String] = repsep("[a-zA-Z]+".r,listseparator) ^^ {case list => "(" + list.toString.stripPrefix("List") + ")"}
  def atom: Parser[Atom] = id~leftparen~litargs~rightparen ^^ {case head~_~tail~_ => Atom(head+tail)}
  def negAtom: Parser[NotAtom] = negOp~>atom ^^ (NotAtom(_))
  def literal: Parser[Literal] = negAtom | atom
  def and: Parser[Formula] = formula~conjOp~formula ^^ {case p1~_~p2 => Conjunction(p1,p2)}
  def formula: Parser[Formula] = literal | and
}

object DomainParser extends mapParser {
  def test() = {
    val domainDesc = "present(A) & ~present(B)"
    println("input: " + domainDesc)
    println("result: " + apply(domainDesc))
  }

  def apply(domainDesc: String) = parseAll(formula, domainDesc) match {
    case Success(result, _) => result
    case failure : NoSuccess => scala.sys.error(failure.msg)
  }
}

I am calling the DomainParser.test() function externally from Java. The input is

present(A) & ~present(B)

which should yield:

Conjunction(Atom(present((A))),NotAtom(Atom(present((B)))))

but instead gives me the error:

Exception in thread "main" java.lang.RuntimeException: string matching regex `\z' expected but `&' found
at scala.sys.package$.error(package.scala:27)
at mAp.DomainParser$.apply(DEL.scala:48)
at mAp.DomainParser$.test(DEL.scala:43)
at mAp.DomainParser.test(DEL.scala)
at ma.MA.main(MA.java:8)


Furthermore, if I call the 'and' parser directly instead of the 'formula' parser, it works fine. Hence the problem seems to be with this line:

def formula: Parser[Formula] = literal | and


Because it attempts to parse the whole line as a single literal. It then parses present(A) correctly, but instead of failing on the '&' (not part of literal's parser) and returning to parse as an 'and'-term, it fails with the exception.

I cannot for the love of... see why it tries to match any '\z' at all. It is not included in the grammar by me, and even if it was - shouldn't it fail and try to parse as the next term instead of exiting with an exception? I am torn between thinking there is some in-built functionality for end-of-string terms that I do not know, to thinking there is something hugely obvious staring me in the face.

Any help would be sorely needed, very welcome and thank you very much in advance.

Dan True

### Fefe

#### At 30C3 we had that Foschepoth talk ...

At 30C3 we had that Foschepoth talk about the history of surveillance in post-war Germany. One of its points was that the German G-10 law was explicitly made the way it is because the Allies insisted on it, Germany not being a sovereign state.

Edward Snowden has answered questions from the EU Parliament, and he explicitly raised this point once again.

In an answer to questions from the European Parliament, Snowden writes: "Germany was pressured to change its G-10 law to appease the NSA, and it eroded the constitutional rights of German citizens." The G-10 law governs interference with the freedom of telecommunication and the tapping of telephones.

It fills me with great joy that he raised this. He also writes explicitly that this happened under pressure from the NSA. And (this part was new to me) that similar influence was exerted in other countries as well. He names Sweden, the Netherlands and New Zealand.

### StackOverflow

#### RxJava: how to compose multiple Observables with dependencies and collect all results at the end?

I'm learning RxJava and, as my first experiment, trying to rewrite the code in the first run() method in this code (cited on Netflix's blog as a problem RxJava can help solve) to improve its asynchronicity using RxJava, i.e. so it doesn't wait for the result of the first Future (f1.get()) before proceeding on to the rest of the code.

f3 depends on f1. I see how to handle this, flatMap seems to do the trick:

Observable<String> f3Observable = Observable.from(executor.submit(new CallToRemoteServiceA()))
.flatMap(new Func1<String, Observable<String>>() {
@Override
public Observable<String> call(String s) {
return Observable.from(executor.submit(new CallToRemoteServiceC(s)));
}
});


Next, f4 and f5 depend on f2. I have this:

final Observable<Integer> f4And5Observable = Observable.from(executor.submit(new CallToRemoteServiceB()))
.flatMap(new Func1<Integer, Observable<Integer>>() {
@Override
public Observable<Integer> call(Integer i) {
Observable<Integer> f4Observable = Observable.from(executor.submit(new CallToRemoteServiceD(i)));
Observable<Integer> f5Observable = Observable.from(executor.submit(new CallToRemoteServiceE(i)));
return Observable.merge(f4Observable, f5Observable);
}
});


This starts to get weird (merging them probably isn't what I want...), but it allows me to do the following at the end, which is not quite what I want:

f3Observable.subscribe(new Action1<String>() {
@Override
public void call(String s) {
System.out.println("Observed from f3: " + s);
f4And5Observable.subscribe(new Action1<Integer>() {
@Override
public void call(Integer i) {
System.out.println("Observed from f4 and f5: " + i);
}
});
}
});


That gives me:

Observed from f3: responseB_responseA
Observed from f4 and f5: 140
Observed from f4 and f5: 5100


which is all the numbers, but unfortunately I get the results in separate invocations, so I can't quite replace the final println in the original code:

System.out.println(f3.get() + " => " + (f4.get() * f5.get()));


I don't understand how to get access to both those return values on the same line. I think there's probably some functional programming fu I'm missing here. How can I do this? Thanks.
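
For comparison (not RxJava, but the same dependency graph): the shape being described, f3 derived from f1, f4 and f5 both derived from f2, and all three results combined in one place, can be sketched with the JDK's CompletableFuture. The service calls here are hard-coded stand-ins chosen to reproduce the question's numbers:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    public static void main(String[] args) {
        // Stand-ins for the remote services; names mirror the question's f1..f5.
        CompletableFuture<String>  f1 = CompletableFuture.supplyAsync(() -> "responseA");
        CompletableFuture<Integer> f2 = CompletableFuture.supplyAsync(() -> 100);

        // f3 depends on f1; f4 and f5 both depend on f2.
        CompletableFuture<String>  f3 = f1.thenApply(a -> "responseB_" + a);
        CompletableFuture<Integer> f4 = f2.thenApply(b -> b + 40);   // -> 140
        CompletableFuture<Integer> f5 = f2.thenApply(b -> b * 51);   // -> 5100

        // Join all three results in one place, like a zip over f3, f4, f5.
        CompletableFuture<String> result =
            f4.thenCombine(f5, (x, y) -> x * y)
              .thenCombine(f3, (prod, s) -> s + " => " + prod);

        System.out.println(result.join()); // responseB_responseA => 714000
    }
}
```

The two thenCombine calls play the role RxJava's zip plays below: nothing is consumed twice, and the final function sees all the values on one line.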

Edit: I was able to solve this with the following. I didn't realize you could flatMap an observable more than once; I assumed results could only be consumed once. So I just flatMap f2Observable twice (sorry, I renamed some things in the code since my original post), then zip all the Observables, then subscribe to that. The Map in the zip to aggregate the values is undesirable because of the type juggling. Other/better solutions and comments are welcome! The full code is viewable in a gist. Thank you.

Future<Integer> f2 = executor.submit(new CallToRemoteServiceB());
Observable<Integer> f2Observable = Observable.from(f2);

Observable<Integer> f4Observable = f2Observable
    .flatMap(new Func1<Integer, Observable<Integer>>() {
        @Override
        public Observable<Integer> call(Integer integer) {
            System.out.println("Observed from f2: " + integer);
            Future<Integer> f4 = executor.submit(new CallToRemoteServiceD(integer));
            return Observable.from(f4);
        }
    });

Observable<Integer> f5Observable = f2Observable
    .flatMap(new Func1<Integer, Observable<Integer>>() {
        @Override
        public Observable<Integer> call(Integer integer) {
            System.out.println("Observed from f2: " + integer);
            Future<Integer> f5 = executor.submit(new CallToRemoteServiceE(integer));
            return Observable.from(f5);
        }
    });

Observable.zip(f3Observable, f4Observable, f5Observable,
    new Func3<String, Integer, Integer, Map<String, String>>() {
        @Override
        public Map<String, String> call(String s, Integer integer, Integer integer2) {
            Map<String, String> map = new HashMap<String, String>();
            map.put("f3", s);
            map.put("f4", String.valueOf(integer));
            map.put("f5", String.valueOf(integer2));
            return map;
        }
    }).subscribe(new Action1<Map<String, String>>() {
        @Override
        public void call(Map<String, String> map) {
            System.out.println(map.get("f3") + " => "
                + (Integer.valueOf(map.get("f4")) * Integer.valueOf(map.get("f5"))));
        }
    });


And this yields me the desired output:

responseB_responseA => 714000
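The core trick here — combining two async results in a single callback instead of separate subscriptions — can also be sketched in plain Java with `CompletableFuture.thenCombine`, without RxJava. This is only an illustrative sketch: the suppliers below are made-up stand-ins for the remote-service futures `f3`, `f4`, and `f5`.

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) {
        // Stand-ins for the remote-service futures (hypothetical fixed values).
        CompletableFuture<Integer> f4 = CompletableFuture.supplyAsync(() -> 140);
        CompletableFuture<Integer> f5 = CompletableFuture.supplyAsync(() -> 5100);
        CompletableFuture<String> f3 = CompletableFuture.supplyAsync(() -> "responseB_responseA");

        // Combine both integers in one callback, then prepend f3's string,
        // so all three values are available on the same "line".
        String line = f4.thenCombine(f5, (a, b) -> a * b)
                        .thenCombine(f3, (product, s) -> s + " => " + product)
                        .join();
        System.out.println(line);  // responseB_responseA => 714000
    }
}
```

RxJava's `zip` plays the same role as `thenCombine` here: both run a combiner function once all inputs are available.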


#### Understanding pattern matching on lists

I've been playing around with Extractors lately and was wondering how the List extractors work especially this:

List(1, 2, 3) match {
case x :: y :: z :: Nil => x + y + z // case ::(x, ::(y, ::(z , Nil)))
}


Ok :: is used in the pattern, so I guess that the compiler now looks up the unapply method in the ::-Object. So tried this:

scala> (::).unapply(::(1, ::(2, Nil)))
res3: Option[(Int, List[Int])] = Some((1,List(2)))


Nice that works. However this does not:

scala> (::).unapply(List(1,2,3))
<console>:6: error: type mismatch;
found   : List[Int]
required: scala.collection.immutable.::[?]
(::).unapply(List(1,2,3))


while this does:

scala> List.unapplySeq(List(1,2,3))
res5: Some[List[Int]] = Some(List(1, 2, 3))


Actually I'm a little puzzled at the moment. How does the compiler choose the right implementation of unapply here?

#### Scala: Pattern matching when one of two items meets some condition

I'm often writing code that compares two objects and produces a value based on whether they are the same, or different, based on how they are different.

So I might write:

val result = (v1,v2) match {
  case (Some(value1), Some(value2)) => "a"
  case (Some(value), None) => "b"
  case (None, Some(value)) => "b"
  case _ => "c"
}


Those 2nd and 3rd cases are the same really, so I tried writing:

val result = (v1,v2) match {
  case (Some(value1), Some(value2)) => "a"
  case (Some(value), None) || (None, Some(value)) => "b"
  case _ => "c"
}


But no luck.

I encounter this problem in a few places, and this is just a specific example, the more general pattern is I have two things, and I want to know if one and only one of them meet some predicate, so I'd like to write something like this:

val result = (v1,v2) match {
  case (Some(value1), Some(value2)) => "a"
  case OneAndOnlyOne(value, v: Option[Foo] => v.isDefined) => "b"
  case _ => "c"
}


So the idea here is that OneAndOnlyOne can be configured with a predicate (isDefined in this case) and you can use it in multiple places.

The above doesn't work at all, since it's backwards: the predicate needs to be passed into the extractor, not returned.

val result = (v1,v2) match {
  case (Some(value1), Some(value2)) => "a"
  case new OneAndOnlyOne(v: Option[Foo] => v.isDefined)(value) => "b"
  case _ => "c"
}


with:

class OneAndOnlyOne[T](predicate: T => Boolean) {
  def unapply(pair: Pair[T,T]): Option[T] = {
    val (item1, item2) = pair
    val v1 = predicate(item1)
    val v2 = predicate(item2)

    if (v1 != v2)
      Some(if (v1) item1 else item2)
    else
      None
  }
}


But, this doesn't compile.

Can anyone see a way to make this solution work? Or propose another solution? I'm probably making this more complicated than it is :)
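The crux of the question is an exclusive-or on the two predicate results. Setting the extractor machinery aside, the logic itself is easy to express as a small generic helper; here is a hedged Java sketch of the same idea (the question is about Scala extractors, so this only illustrates the "one and only one matches" test, not the pattern-match syntax):

```java
import java.util.Optional;
import java.util.function.Predicate;

public class OneAndOnlyOne {
    // Returns the single item satisfying the predicate,
    // if exactly one of the two does (logical XOR); empty otherwise.
    static <T> Optional<T> oneAndOnlyOne(T a, T b, Predicate<T> p) {
        boolean pa = p.test(a);
        boolean pb = p.test(b);
        if (pa != pb) {              // exactly one matches
            return Optional.of(pa ? a : b);
        }
        return Optional.empty();     // zero or two matches
    }

    public static void main(String[] args) {
        Predicate<Optional<Integer>> defined = Optional::isPresent;
        // One defined, one empty: returns the defined one.
        System.out.println(oneAndOnlyOne(Optional.of(1), Optional.<Integer>empty(), defined));
        // Both defined: no unique match.
        System.out.println(oneAndOnlyOne(Optional.of(1), Optional.of(2), defined));
    }
}
```

In Scala, wiring this into a `case` clause still requires a stable extractor object per predicate, which is exactly the friction the question runs into.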

#### Scala 2.8.0.RC2 compiler issue on pattern matching statement?

Why does the following module not compile on Scala 2.8.RC[1,2]?

object Test {

import util.matching.Regex._

val pVoid = """\s*void\s*""".r
val pVoidPtr = """\s*(const\s+)?void\s*\*\s*""".r
val pCharPtr = """\s*(const\s+)GLchar\s*\*\s*""".r
val pIntPtr = """\s*(const\s+)?GLint\s*\*\s*""".r
val pUintPtr = """\s*(const\s+)?GLuint\s*\*\s*""".r
val pFloatPtr = """\s*(const\s+)?GLfloat\s*\*\s*""".r
val pDoublePtr = """\s*(const\s+)?GLdouble\s*\*\s*""".r
val pShortPtr = """\s*(const\s+)?GLshort\s*\*\s*""".r
val pUshortPtr = """\s*(const\s+)?GLushort\s*\*\s*""".r
val pInt64Ptr = """\s*(const\s+)?GLint64\s*\*\s*""".r
val pUint64Ptr = """\s*(const\s+)?GLuint64\s*\*\s*""".r

def mapType(t: String): String = t.trim match {
case pVoid() => "Unit"
case pVoidPtr() => "ByteBuffer"
case pCharPtr() => "CharBuffer"
case pIntPtr() | pUintPtr() => "IntBuffer"
case pFloatPtr() => "FloatBuffer"
case pShortPtr() | pUshortPtr() => "ShortBuffer"
case pDoublePtr() => "DoubleBuffer"
case pInt64Ptr() | pUint64Ptr() => "LongBuffer"
case x => x
}
}


UPDATE 1

After following the advice in the answer, the next issue is that compilation takes too long. Interestingly, if I remove 2 of the case statements above I get the following compiler error:

object Test {

import util.matching.Regex._

val PVoid = """\s*void\s*""".r
val PVoidPtr = """\s*(const\s+)?void\s*\*\s*""".r
val PCharPtr = """\s*(const\s+)GLchar\s*\*\s*""".r
val PIntPtr = """\s*(const\s+)?GLint\s*\*\s*""".r
val PUintPtr = """\s*(const\s+)?GLuint\s*\*\s*""".r
val PFloatPtr = """\s*(const\s+)?GLfloat\s*\*\s*""".r
val PDoublePtr = """\s*(const\s+)?GLdouble\s*\*\s*""".r
val PShortPtr = """\s*(const\s+)?GLshort\s*\*\s*""".r
val PUshortPtr = """\s*(const\s+)?GLushort\s*\*\s*""".r
val PInt64Ptr = """\s*(const\s+)?GLint64\s*\*\s*""".r
val PUint64Ptr = """\s*(const\s+)?GLuint64\s*\*\s*""".r

def mapType(t: String): String = t.trim match {
case PVoid() => "Unit"
case PVoidPtr() => "ByteBuffer"
case PCharPtr() => "CharBuffer"
case PIntPtr() | PUintPtr() => "IntBuffer"
case PFloatPtr() => "FloatBuffer"
case PShortPtr() | PUshortPtr() => "ShortBuffer"
case PDoublePtr() => "DoubleBuffer"
case PInt64Ptr() | PUint64Ptr() => "LongBuffer"
case x => x
}
}

case d(x) ⇒ println(s"d: $x") }

Feature request: SI-5435

#### Nested Scala matchers why Some(Some(1),1) can't match?

It seems that nested matching doesn't work, which is a strange limitation. An example of the behaviour follows:

Some(Some(1),2) match {
  case Some(Some(a),b) => a
  case e => e
}
<console>:9: error: wrong number of arguments for <none>: (x: (Some[Int], Int))Some[(Some[Int], Int)]
       case Some(Some(a),b) => a
            ^
<console>:9: error: not found: value a
       case Some(Some(a),b) => a
                               ^

This works:

Some(Some(1),2) match {
  case Some(a) => a match {
    case (Some(a),b) => "yay"
    case e => "nay"
  }
}

Now, am I just being a twit or is there a better way to achieve this?

### /r/netsec

#### Myths about /dev/urandom [x-post /r/linux_programming]

### TheoryOverflow

#### Where does randomness help when deciding algebraic geometry over $\mathbb{C}$?

If we have a single straight line program expressing a multivariate polynomial equation with integer coefficients, the Schwartz-Zippel lemma gives a simple randomized algorithm for deciding whether the equation is always true. We can similarly decide if a single polynomial inequation is always true over $\mathbb{C}$, since $p(z_1,\ldots,z_n) \ne 0$ for all $z_i$ iff the polynomial is a nonzero constant.

Does the simplicity of randomized checking extend to any systems of multivariate polynomial equations? In particular, is there a simple algorithm for deciding whether $$p(z_1,\ldots,z_n) = 0 \implies q(z_1,\ldots,z_n) = 0$$ where $p,q$ are straight line programs over $\mathbb{C}$ with integer coefficients?

I'm fairly sure randomness does not provide simple algorithms if we go to arbitrary systems of equations and inequations, but I'm curious where the boundary is between easy and hard.

### StackOverflow

#### F-Sharp (F#) untyped infinity

I wonder why F# doesn't support infinity. This would work in Ruby (but not in F#):

let numbers n = [1 .. 1/0] |> Seq.take(n)

-> System.DivideByZeroException: Attempted to divide by zero.
I can write the same functionality in a much more complex way:

let numbers n = 1 |> Seq.unfold (fun i -> Some (i, i + 1)) |> Seq.take(n)

-> works

However I think the first one would be much clearer. I can't find any easy way to use dynamically typed infinity in F#. There is an infinity keyword but it is a float:

let a = Math.bigint +infinity;;

System.OverflowException: BigInteger cannot represent infinity.
  at System.Numerics.BigInteger..ctor(Double value)
  at .$FSI_0045.main@()
stopped due to error

Edit: also this seems to work in iteration:

let numbers n = Seq.initInfinite (fun i -> i+1) |> Seq.take(n)
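For comparison, the same lazy-infinite-sequence idea exists in Java's streams: `Stream.iterate` plays the role of `Seq.initInfinite`/`Seq.unfold`, and `limit(n)` the role of `Seq.take`. A minimal sketch (the `numbers` name mirrors the F# function above):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class InfiniteDemo {
    // numbers(n): the first n naturals drawn from a conceptually infinite stream.
    static List<Integer> numbers(int n) {
        return Stream.iterate(1, i -> i + 1)  // lazy "infinite" sequence 1, 2, 3, ...
                     .limit(n)                // take only the first n elements
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(numbers(5));  // [1, 2, 3, 4, 5]
    }
}
```

The point in both languages is that the sequence is lazy, so no actual infinity value is ever needed — which is why the `[1 .. 1/0]` trick is unnecessary as well as ill-typed.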


### Planet Clojure

#### Recommended Clojure Use-Case Read : Greenius Stack

Another day, another Clojure use-case.  This is an interesting summary of the use of Clojure, Datomic and more technologies.

Go to more posts on clojure or that are clojure-related at http://digitalcld.com/cld/category/clojure.

### StackOverflow

#### Add element to scala ListBuffer in java

If it is possible, what would be the easiest way of adding an element to a Scala ListBuffer from within Java?

This is the Scala ListBuffer (in Scala):

var newModVersion: ListBuffer[NewModVersionEntry] = new ListBuffer[NewModVersionEntry]()


and this is what I want to add (in Java):

XplosionCoreBL.newModVersion().add(new NewModVersionEntry(name, latestVersion, latestMCVersion, isCritical, description));


#### Provide multiple implementations for a Clojure protocol

I have a namespace that exposes common data-related functions (get-images, insert-user). I then have two database backends that have those same functions and implement them differently. They implement the interface as it were. Each backend is contained within a namespace.

I can't seem to be able to find a good solution on how to accomplish this.

I tried dynamically loading the ns but no luck. Once you do (:require [abc :as x]), the x isn't a real value.

I tried using defprotocol and deftype but that's all kinds of weird because the functions in the deftype need to be imported, too and that messes everything up for me.

Is there some idiomatic solution to this?
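The shape being described — one namespace of common operations, two interchangeable backends — is the same shape as an interface with two implementations selected at runtime. A hedged Java sketch of that structure (backend names and the `use.mongo` system property are invented for illustration; in Clojure the analogous idiomatic tools are defprotocol with defrecord/reify per backend):

```java
import java.util.List;

public class BackendDemo {
    // The "protocol": common data-access operations.
    interface Backend {
        List<String> getImages();
    }

    // Two interchangeable implementations, analogous to the two database namespaces.
    static class SqlBackend implements Backend {
        public List<String> getImages() { return List.of("sql-image"); }
    }

    static class MongoBackend implements Backend {
        public List<String> getImages() { return List.of("mongo-image"); }
    }

    public static void main(String[] args) {
        // Pick the implementation at runtime; callers only ever see the interface.
        Backend backend = Boolean.getBoolean("use.mongo")
                ? new MongoBackend()
                : new SqlBackend();
        System.out.println(backend.getImages());
    }
}
```

The key property carried over from either language: callers depend only on the abstract operations, so swapping backends never touches call sites.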

#### Does Scala have a function application operator?

F# has the pipeline operators:

arg |> func // or arg2 |> func arg1, as opposed to func arg1 arg2
func <| arg


Haskell has the $ operator:

func $ arg -- or func1 $ func2 arg, as opposed to func1 (func2 arg)

They're mostly used to increase readability by de-cluttering function calls. Is there a similar operator in Scala?

### CompsciOverflow

#### What is the evidence that types are more basic specifications, and specifications are more detailed types?

In the book Type Theory and Functional Programming [Thompson, S 1999] the author explains the relationship between specifications, types and proofs of functions:

The equivalent specifications can be thought of as suggesting different program development methods: using the ∃∀ form, we develop the function and its proof as separate entities, either separately or together, whilst in the ∀∃ form we extract a function from a proof, post hoc. This analysis of specifications makes it clear that when we seek a program to meet a specification, we look for the first component of a member of an existential type; the second proves that the program meets the constraint part of the specification.

On this same topic, the commenter writes:

Specifications are in a way "more detailed" types. Or, stated the other way, types are more basic specifications. Martin-Löf type theory is precisely about fusing the two ideas into one.

My question is: What is the evidence that types are more basic specifications, and specifications are more detailed types?

### StackOverflow

#### How does for work in this recursive Clojure code?

Clojure beginner here. Here's some code I'm trying to understand, from http://iloveponies.github.io/120-hour-epic-sax-marathon/sudoku.html (one page of a rather nice beginning Clojure course):

Subset sum is a classic problem. Here's how it goes. You are given:

a set of numbers, like #{1 2 10 5 7}
and a number, say 23

and you want to know if there is some subset of the original set that sums up to the target. We're going to solve this by brute force using a backtracking search.
Here's one way to implement it:

(defn sum [a-seq]
  (reduce + a-seq))

(defn subset-sum-helper [a-set current-set target]
  (if (= (sum current-set) target)
    [current-set]
    (let [remaining (clojure.set/difference a-set current-set)]
      (for [elem remaining
            solution (subset-sum-helper a-set
                                        (conj current-set elem)
                                        target)]
        solution))))

(defn subset-sum [a-set target]
  (subset-sum-helper a-set #{} target))

So the main thing happens inside subset-sum-helper. First of all, always check if we have found a valid solution. Here it's checked with

(if (= (sum current-set) target)
  [current-set]

If we have found a valid solution, return it in a vector (we'll see soon why in a vector). Okay, so if we're not done yet, what are our options? Well, we need to try adding some element of a-set into current-set and try again. What are the possible elements for this? They are those that are not yet in current-set. Those are bound to the name remaining here:

(let [remaining (clojure.set/difference a-set current-set)]

What's left is to actually try calling subset-sum-helper with each new set obtainable in this way:

(for [elem remaining
      solution (subset-sum-helper a-set
                                  (conj current-set elem)
                                  target)]
  solution)

Here first elem gets bound to the elements of remaining one at a time. For each elem, solution gets bound to each element of the recursive call

(subset-sum-helper a-set (conj current-set elem) target)

And this is the reason we returned a vector in the base case, so that we can use for in this way.

And sure enough, (subset-sum #{1 2 3 4} 4) returns (#{1 3} #{1 3} #{4}).

But why must line 3 of subset-sum-helper return [current-set]? Wouldn't that return a final answer of ([#{1 3}] [#{1 3}] [#{4}])? I tried removing the enclosing brackets in line 3, making the function begin like this:

(defn subset-sum-helper [a-set current-set target]
  (if (= (sum current-set) target)
    current-set
    (let ...
Now (subset-sum #{1 2 3 4} 4) returns (1 3 1 3 4), which makes it look like it accumulates not the three sets #{1 3}, #{1 3}, and #{4}, but rather just the "bare" numbers, giving (1 3 1 3 4).

So subset-sum-helper is using the list comprehension for within a recursive calculation, and I don't understand what's happening. When I try visualizing this recursive calculation, I found myself asking, "So what happens when

(subset-sum-helper a-set (conj current-set elem) target)

doesn't return an answer because no answer is possible given its starting point?" (My best guess is that it returns [] or something similar.)

I don't understand what the tutorial writer meant when he wrote, "And this is the reason we returned a vector in the base case, so that we can use for in this way." I would greatly appreciate any help you could give me. Thanks!

#### libzmq not found by clrzmq in Xamarin Studio/C# application

I'm using Xamarin Studio on a Mac, with clrzmq included via NuGet. clrzmq references libzmq.dll. My app compiles fine, but when I try to run it, I get this:

Unhandled Exception: System.DllNotFoundException: libzmq
  at (wrapper managed-to-native) ZMQ.C:zmq_init (int)
  at ZMQ.Context..ctor (Int32 io_threads) [0x00000] in <filename unknown>:0
  at FeatureSpike.MainClass.Main (System.String[] args) [0x00000] in <filename unknown>:0

libzmq.dll is definitely there in the build target directory. Does anyone know why it's not being found?

### Lobsters

#### Hardware Scrambling – No More Password Leaks

### StackOverflow

#### Is there a hotkey for searching for references in IntelliJ when using Scala?

In Eclipse, the hotkey CTRL+SHIFT+G on a name starts a search for references. This is very useful for finding where a certain method is used. Is there anything similar in IntelliJ when using Scala?
### /r/compsci

#### Propositions as Types (Philip Wadler)

### StackOverflow

#### Play Framework: Count how many times a key exists in a JSON tree and how many times it is set to a certain value

Given the following JSON...

{
  "firstName": "Joe",
  "lastName": "Grey",
  ...
  "addresses": [
    { "name": "Default", "street": "...", ..., "isDefault": true },
    { "name": "Home", "street": "...", ..., "isDefault": false },
    { "name": "Office", "street": "...", ..., "isDefault": false }
  ]
}

How do I count how many times isDefault is set to false in Scala?

### Dave Winer

#### My templates are open source

As I was working on Fargo 2, I got more comfortable with GitHub, and did something I've always wanted to do -- I released the templates for the core types as open source. I hope at some point (now?) this will help get a design community booted up around Fargo.

Why? Well, I'm all thumbs when it comes to CSS. But I love the results that great designers can produce with it. Fargo is a design platform, and the templates, released as open source, can make that work as a real community. For example, I had trouble making Disqus comments work with the medium template, so I tabled it. I never have gotten back to it. Maybe someone else has the patience or know-how to make it work.

To make this work, you have to learn how to use Fargo as a CMS. And that's another benefit. If we can have a platform that's equally comfortable for writers, programmers and designers, then we really have something. This is something we had working well in the early blogging communities, with my own Manila and Radio, as well as with Blogger, Movable Type, Tumblr. We can have it again, with the benefits of modern browsers and servers.

The templates are released in OPML and plain text. If you want to edit them in Fargo, of course you'd use the OPML. If you figure out a way to make it work when editing as plain text, more power to you, I still want to share.
Also this may seem like gibberish today, but I hope in a few months it'll seem easy.

### CompsciOverflow

#### Representative tree from sets of decision trees

I built a set of samples from an imbalanced dataset with two classes through the undersampling technique. Now, from that set of decision trees I would like to choose one representative tree. Is there any algorithm to do that?

### StackOverflow

#### Add safeGet method to Sized from Shapeless

I set about adding a safeGet method to Sized because I felt it was what would be best for some client code I am working on. I was able to get it to work with an awful hack that works in my case but not in the general case. This is the method signature and definition:

def safeGet(m : Nat)(implicit diff : Diff[L, Succ[m.N]], ev : ToInt[m.N]) = r(toInt[m.N])

However, there are a few additional hacks that must be performed for this to work because Traversable does not define the apply method. Here is the full (small) diff: https://github.com/jedesah/shapeless/commit/ab52185bec7463f54a040e7857cba7c5758fe46e

Anyone have any ideas how I could get the same result in a more appropriate way? I feel like there would have to be an IsGenSeqLike in the standard library for this to be done correctly. Then I could leave Traversable alone and only define safeGet on Seq.

### DragonFly BSD Digest

#### BSDNow vs. BSDTalk

Episode 27 of BSDNow is an interview with Will Backman of BSDTalk. It is unfortunately a straight-ahead interview, and not an Epic Rap Battle.

### StackOverflow

#### Paredit doesn't remove right paren on paredit-backward-kill-word

The buffer is "(|)". On Alt+Backspace, which sends paredit-backward-kill-word, it only removes the left paren and leaves the buffer as "|)". I thought it was a bug in Paredit or Emacs. But Alt+Backspace works exactly the same way in the Clojure editor in IntelliJ IDEA. It made me think - is it a feature in Paredit? What's the point?
### CompsciOverflow

#### Recurrence by Substitution Method [on hold]

I have been trying to find an example similar to this one with no luck. Any suggestions on where to begin with this question will be helpful.

#### Every language that is reducible to a language in $\Sigma_i^p$ is also in $\Sigma_i^p$. How?

The complexity class $\Sigma_{k}^{p}$ is recursively defined as follows:

\begin{align} \Sigma_{0}^{p} & := P, \\ \Sigma_{k+1}^{p} & := P^{\Sigma_{k}^{p}}. \end{align}

Why is every language that is reducible to a language in $\Sigma_i^p$ also in $\Sigma_i^p$? This comes up in the proof of the theorem: if there is a PH-complete problem, then PH (the polynomial hierarchy) collapses.

### High Scalability

#### Stuff The Internet Says On Scalability For March 7th, 2014

Hey, it's HighScalability time:

Twitter valiantly survived an Oscar DDoS attack by non-state actors.

• Several Billion: Apple iMessages per Day along with 40 billion notifications and 15 to 20 million FaceTime calls. Take that WhatsApp. Their architecture? Hey, this is Apple, only the Shadow knows.
• 200 bit quantum computer: more states than atoms in the universe; 10 million matches: Tinder's per day catch; $1 billion: Kickstarter's long tail pledge funding achievement
• Quotable Quotes:
• @cstross: Let me repeat that: 100,000 ARM processors will cost you a total of $75,000 and probably fit in your jacket pocket.
• @openflow: "You can no longer separate compute, storage, and networking." -- @vkhosla #ONS2014
• @HackerNewsOnion: New node.js co-working space has 1 table and everyone takes turns
• @chrismunns: we're reaching the point where ease and low cost of doing DDOS attacks means you shouldn't serve anything directly out of your origin
• @rilt: Mysql dead, Cassandra now in production using @DataStax python driver.
• @CompSciFact: "No engineered structure is designed to be built and then neglected or ignored." -- Henry Petroski
• Arundhati Roy: Revolutions can, and often have, begun with reading.
• Brett Slatkin: 3D printing is to design what continuous deployment is to code.
• Well Facebook got on that right quick: Facebook wants to use drones to blanket remote regions with Internet. We talked about a drone-driven Internet back in January. This is good news IMHO. Facebook will have the resources to make this really happen. Hopefully. Maybe. Cross your fingers.
• A vast hidden surveillance network runs across America, powered by the repo industry. This intelligence database was powered by individuals driving around and taking pictures of licence plates to track cars. Imagine how Google Glass will enable the tracking of people, without any three letter government agencies in the loop. Crowdsourcing is fun!
• Francis Bacon way back in the 1700s was all over BigData with his ant, spider, and honey bee analogy: Good scientists are not like ants (mindlessly gathering data) or spiders (spinning empty theories). Instead, they are like bees, transforming nature into a nourishing product. This essay examines Bacon's "middle way" by elucidating the means he proposes to turn experience and insight into understanding.
The human intellect relies on "machines" to extend perceptual limits, check impulsive imaginations, and reveal nature's latent causal structure, or "forms."

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

### StackOverflow

#### Regular expressions in scala

I am relatively new to Scala and I have been working on some school projects and it was all good :), but right now I'm stuck with a regular expression validator. This program has to find out if a string can be generated by a regular expression; for instance, "ad" can be generated by the regular expression "(a|b)(d)". In my implementation this works like this:

def exp: Exp = Concat(Opcion(Simple("a"), Simple("b")), Simple("d"))
exp.eval("ad") // returns true

I'd like to know if there is a better way to accomplish my objective in the functional way. The current implementation is highly inefficient, and also when trying to program Kleene closures and positive closures this becomes very complex.
This is what I have done so far:

abstract class Exp {
  def eval(exp: String): Boolean = this match {
    case Simple(a) => a == exp
    case Opcion(a, b) => a.eval(exp) || b.eval(exp)
    case Concat(a, b) => a.concat(b).eval(exp)
    case COpcional(a) => Opcion(Simple(""), a).eval(exp)
  }

  private def concat(exp: Exp): Exp = this match {
    case Simple(a) => exp match {
      case Simple(b) => Simple(a + b)
      case _ => Concat(Simple(a), exp)
    }
    case Opcion(a, b) => Opcion(a.concat(exp), b.concat(exp))
    case Concat(a, b) => a.concat(b.concat(exp))
    case COpcional(a) => Opcion(Simple("").concat(exp), a.concat(exp))
  }
}

case class Simple(valor: String) extends Exp
case class Opcion(e1: Exp, e2: Exp) extends Exp
case class Concat(e1: Exp, e2: Exp) extends Exp
case class COpcional(e1: Exp) extends Exp
case class CPositiva(e1: Exp) extends Exp
case class CKleene(e1: Exp) extends Exp

### kuro5hin

#### Solution to the Borker Problem

As you may know when I'm not manic, I work tirelessly to grandiose goals with no hope of fulfilling them. Why just the other day while I was furriously masterbating at the #SALMONCREEK #STARBUCKS mens room, I temporarly lost vision as I came, and saw a vision. It was G-d all mighty himself, and he commanded me to take care of the Borker* problem.

*by borker, I mean broker, but I can't bring myself to call them by their actual names.

### Lambda the Ultimate

#### Propositions as Types

Propositions as Types, Philip Wadler. Draft, March 2014.

The principle of Propositions as Types links logic to computation. At first sight it appears to be a simple coincidence---almost a pun---but it turns out to be remarkably robust, inspiring the design of theorem provers and programming languages, and continuing to influence the forefronts of computing. Propositions as Types has many names and many origins, and is a notion with depth, breadth, and mystery.
Philip Wadler has written a very enjoyable ("Like busses: you wait two thousand years for a definition of 'effectively calculable', and then three come along at once") paper about propositions as types that is accessible to PLTlettantes.

### /r/compsci

#### Anyone good with figuring out digital logic circuits?

http://i.imgur.com/oiF53hN.png

I believe that the 4 circuits are D flip-flops because there is a clock signal..? However I am not sure how to figure this out. Is the "start" an enable or something? I am confused about what it is supposed to do and how it affects the whole circuit.

submitted by badAtMathing

[link] [8 comments]

### Lobsters

#### Handling and Processing Strings in R

### StackOverflow

#### Scala + Android IDE

I am an Eclipse user and Android developer. I am trying to develop Android applications using Scala. I managed to do a hello world once in Eclipse, and now I am trying to do it with IntelliJ IDEA, so I can choose the best option. I managed to run simple Scala examples in IntelliJ and I am now trying to create an Android application there.

The problem is: Eclipse seems a lot EASIER for building an Android application with Scala than IntelliJ. From what I have seen, you need to use SBT and install a lot of "extras" to get things done.

I have seen some questions here about IDEs for Scala, but didn't find a recent question about the combo Scala + Android.

So, my question is... Should I give IntelliJ a try or is Eclipse just fine? Is it possible to run the application on my smartphone easily with IntelliJ?

### TheoryOverflow

#### equivalent way(s) of expressing P=?NP problem in linear programming?

The paper "In defense of the Simplex Algorithm's worst-case behavior" by Disser/Skutella [1] was recently cited on this tcs.se site by saeed on another interesting question. The paper introduces the idea of "NP-mighty" algorithms (p3, def 2).
It follows a fruitful/continuing line of research analyzing P $\stackrel{?}{=}$ NP wrt the simplex algorithm and linear programming, of which there have been other major recent advances, e.g. results by Pokutta et al in [2] showing that the P-time TSP polytope must have an "unlikely/restrictive" form (commentaries in Barriers to P/NP proofs, RJLipton, also Stating P $\stackrel{?}{=}$ NP without TMs).

Question (possibly with multiple leading answers): the Disser/Skutella paper has closely related ideas but does not seem to explicitly reformulate the P $\stackrel{?}{=}$ NP question. What is an equivalent way to state/study it in their introduced schema/framework of "NP-mighty" algorithms? What is a basic open problem in simplex/linear programming complexity theory that is equivalent to P $\stackrel{?}{=}$ NP?

(Somewhat related question: the Disser/Skutella paper also refers to Klee-Minty cubes, long used to show worst-case behavior of the simplex algorithm. Are there any results relating lower bounds on them to general algorithmic lower bounds and/or complexity class separations, e.g. P $\stackrel{?}{=}$ NP?)

[1] "In defense of the Simplex Algorithm's worst-case behavior", Disser/Skutella

[2] Exponential Lower Bounds for Polytopes in Combinatorial Optimization, Fiorini et al

### StackOverflow

#### Max age is not working in Play Framework 2.2.0 (Scala)

I am using Play Framework 2.2.0 with Scala. I want to expire the session after a fixed time, so I put this code in conf/application.conf:

application.session.maxAge=1h

but it is not working.
Is there any way to set a maximum age for the session in application.conf, or by overriding a method in the controller? If I want to expire the session after 50 seconds, would I have to write it like this?

application.session.maxAge=50sec

Thanks for replies.

#### how to make scalatest generate html report through sbt

The way to do this for specs2-based tests in sbt is

(testOptions in Test) += Tests.Argument(TestFrameworks.Specs2, "html")

but how about scalatest? I've done lots of Google searching, but cannot find a good explanation/solution.

#### Check if only specific properties of 2 objects are equal

Say I have a class Person (assume all of the properties can be set and can also be null):

public class Person {
  private String firstName;
  private String secondName;
  private Address address;

  public String getFirstName() { return firstName; }
  public String getSecondName() { return secondName; }
  public Address getAddress() { return address; }
}

If I have two instances of Person, and I want to check if they both have the same firstName and secondName, I cannot simply call equals() as that will also check address for equality (and any other properties for that matter). I would have to write a function like this:

boolean areNamesEqual(Person person1, Person person2) {
  if (person1 == null || person2 == null) {
    return person1 == person2;
  }
  if (person1.getFirstName() != null
      ? !person1.getFirstName().equals(person2.getFirstName())
      : person2.getFirstName() != null) {
    return false;
  }
  if (person1.getSecondName() != null
      ? !person1.getSecondName().equals(person2.getSecondName())
      : person2.getSecondName() != null) {
    return false;
  }
  return true;
}

Is there any cleaner way to express this? It feels like Java is making me jump through quite a few hoops to express this simple idea.
I have started looking at Google Guava, and have seen I can use Objects.equal() to improve matters:

boolean areNamesEqual(Person person1, Person person2) {
  if (person1 == null || person2 == null) {
    return person1 == person2;
  }
  if (!Objects.equal(person1.getFirstName(), person2.getFirstName())) {
    return false;
  }
  if (!Objects.equal(person1.getSecondName(), person2.getSecondName())) {
    return false;
  }
  return true;
}

But I still have to check for the Person objects themselves being null, and write getFirstName() and getSecondName() twice each. It feels like there must be a better way of expressing this. Code like this would be ideal:

(person1, person2).arePropertiesEqual(firstName, secondName)

With this I don't have to check for null anywhere, I don't have to return early, and I don't have to write firstName or secondName more than once. Any ideas?

#### Could/should an implicit conversion from T to Option[T] be added/created in Scala?

Is this an opportunity to make things a bit more efficient (for the programmer)? I find it gets a bit tiresome having to wrap things in Some, e.g. Some(5). What about something like this:

implicit def T2OptionT(x: T): Option[T] = if (x == null) None else Some(x)

### TheoryOverflow

#### Liveness constraints as monotone functions

I have two questions regarding an example from Michael I. Schwartzbach's lecture notes on Static Analysis. The paper defines and describes some properties of lattices and then uses static analysis to determine liveness. In the example, the program is transformed into a CFG, a set of constraint functions is derived for each CFG node, and then the constraint equations are solved to produce the liveness for each node.

I am having trouble bridging the gap between lattices and the liveness example. I believe the lattice is the powerset of the program's variables, but what is the partial order relation: is it the CFG? Also, why are the constraint functions considered monotone?
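One way to see the monotonicity question concretely: a liveness constraint has the shape in = (out \ def) ∪ use, where def and use are fixed per node; the partial order is set inclusion on the powerset of variables (the CFG only determines which equations feed which), and both set difference with a constant set and union with a constant set preserve inclusion. A small Java sketch of one such transfer function (variable names invented for illustration):

```java
import java.util.Set;
import java.util.TreeSet;

public class LivenessDemo {
    // Liveness transfer function for one CFG node: in = (out \ def) ∪ use.
    // The lattice element is a set of live variable names, ordered by inclusion.
    static Set<String> transfer(Set<String> out, Set<String> def, Set<String> use) {
        Set<String> in = new TreeSet<>(out);
        in.removeAll(def);  // kill the variables defined at this node
        in.addAll(use);     // gen the variables read at this node
        return in;
    }

    public static void main(String[] args) {
        Set<String> def = Set.of("x");
        Set<String> use = Set.of("y");
        // Monotonicity: enlarging the input set can only enlarge the output set.
        Set<String> small = transfer(Set.of("x"), def, use);       // {y}
        Set<String> large = transfer(Set.of("x", "z"), def, use);  // {y, z}
        System.out.println(large.containsAll(small));              // true
    }
}
```

Monotonicity of these functions is what guarantees that the fixed-point iteration over the constraint equations terminates at the least solution.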
### Portland Pattern Repository

#### Edwin Watkeys (by 207.239.61.34 31 hours ago)

### StackOverflow

#### Implicit conversion method in companion object needs to be imported? Contradiction with "Scala for the impatient" book

The code below does not work, but it should according to the "Scala for the impatient" book (please see excerpt below). So what do I not understand here? Did the rules for implicit conversion change in recent versions of Scala (2.8 vs. 2.10)?

EDIT: After looking at this question I realised that b.hello needs to be changed to (b:A).hello. That is not very implicit. Is there any way around this?

EDIT 2: After reading some more, it seems that there is no other way than import.

    object A {
      implicit def b2a(b: B) = new A
    }
    class A {
      def hello = println("hello")
    }
    class B

    object ImplicitConversion extends App {
      val b = new B
      b.hello
    }

#### How to solve connect timeout exception issue in play framework 2.2.1

I am calling a web service in Play Framework with Scala. The code follows the Producer/Consumer pattern. Each call to WS takes about 2 seconds, but many such calls are made, together exceeding 120 seconds (the default timeout in Play). Hence it throws a java.net.ConnectException after exactly 120 seconds. Questions:

1. Why are the times of all calls being added up, rather than the calls being treated individually, in which case the timeout would not be an issue?
2. I tried one solution of increasing the timeout (fixed ws.timeout), but for me the issue still exists.
3. Is it a problem of threading or concurrency?
Here is the code of the class:

    class WS(sentenceList: List[String], queue: BlockingQueue[Future[Response]], filename: String) {

      val listofJson = new ListBuffer[(String, JsValue)]
      listofJson.clear

      def callWSProducer() = {
        sentenceList.foreach { name =>
          val data = Json.obj(
            "input_sent" -> name,
            "Filename" -> filename)
          val holder: Future[Response] = WS.url("http://0.0.0.0:8015/endpoint/")
            .withHeaders("Content-Type" -> "application/json")
            .post(data)
          implicit val context = scala.concurrent.ExecutionContext.Implicits.global
          queue.put(holder)
        }
      }

      def WSConsumer(): List[(String, JsValue)] = {
        sentenceList.foreach { name =>
          val result = Await.result(queue.take(), 100.second)
          val out = (result.json \ "sentence")
          listofJson += ((name, out))
        }
        return listofJson.toList
      }
    }

The error I am getting in the console: error.txt

EDIT: Let me make the question a little clearer. Firstly, the functions above are called from the controller (main thread) by creating an object of the above class. The JSON list above is returned to the controller, which in turn returns it to the view. Because we have to return the list, the only possible way to do it that we could come up with is using the await (blocking) mechanism. I know there are threading issues with the code, but could someone at least point those out.
All the methods we have tried either lead to the 120-second timeout mentioned above, or to the 100-second future timeout when there is some kind of deadlock in our await block, as when we use a solution similar to the one mentioned here: Scala Play Resolve a list of futures

### Lobsters

#### LISP - a Language for Internet Scripting and Programming

### StackOverflow

#### Use case for WebDriver (https://github.com/huntc/webdriver) to give instructions to any browser

I'm working on a web scraper using WebDriver and I found this project built in Scala, Spray and Akka (https://github.com/huntc/webdriver). Until now, due to the lack of documentation, I haven't figured out how to send instructions to any browser (I'd rather use PhantomJS) to do something like:

1. Make a request to an URL.
2. Execute javascript somehow.
3. Inject jQuery (if possible).

Here's the Main.scala code provided in the repo for reference:

    package com.typesafe.webdriver.tester

    import akka.actor.{ActorRef, ActorSystem}
    import akka.pattern.ask
    import akka.pattern.gracefulStop
    import com.typesafe.webdriver.{Session, PhantomJs, LocalBrowser}
    import akka.util.Timeout
    import scala.concurrent.duration._
    import scala.concurrent.ExecutionContext.Implicits.global
    import spray.json._
    import scala.concurrent.{Future, Await}

    object Main {
      def main(args: Array[String]) {
        implicit val system = ActorSystem("webdriver-system")
        implicit val timeout = Timeout(5.seconds)
        system.scheduler.scheduleOnce(7.seconds) {
          system.shutdown()
          System.exit(1)
        }
        val browser = system.actorOf(PhantomJs.props(), "localBrowser")
        browser ! LocalBrowser.Startup
        for (
          session <- (browser ? LocalBrowser.CreateSession).mapTo[ActorRef];
          result <- (session ? Session.ExecuteJs("return 5+5", JsArray(JsNumber(123)))).mapTo[JsNumber]
        ) yield {
          println(result)
          try {
            val stopped: Future[Boolean] = gracefulStop(browser, 1.second)
            Await.result(stopped, 2.seconds)
            System.exit(0)
          } catch {
            case _: Throwable =>
          }
        }
      }
    }

### TheoryOverflow

#### Complexity of Haemers' minimum rank

In 1978 Willem H. Haemers published "An upper bound on the Shannon capacity of a graph". Tims has a survey of more recent results in his thesis. What is the computational complexity of computing Haemers' minimum rank function?

### QuantOverflow

#### How is historical data for forex collected or computed?

I'm looking at four sources of forex data, as compiled in the question, What data sources are available online? And I think I must be misunderstanding something, perhaps something fundamental, but I'm not sure what. Given my ignorance, it's hard to pin down my confusion as a single question, so I'll express a few, in hopes someone can pick up on the source of my confusion and shed light.

• The tick data from DukasCopy shows 5 columns: Time, Ask, Bid, AskVolume, and BidVolume. Am I correct to believe that this data shows no information about actual executions—only changes to the quotes? (Am I correct to believe that the AskVolume and BidVolume columns describe only the quantity offered at the Ask and Bid prices, respectively?)

• The 1-second data from DukasCopy shows 6 columns: Time, Open, High, Low, Close, and Volume. Am I correct to believe that this data does show information about actual executions, i.e. Volume being the quantity traded in the given time period of 1 second?

• OANDA only provides daily and weekly data, the forexforums.com links only provide 1-minute data (resembling DukasCopy's 1-second data), and GAIN Capital seems to provide tick data (resembling DukasCopy's tick data) without the bid and ask volumes. I have looked at many other sources as well, but can't seem to find tick data on trades, i.e.
the time, price, and quantity at every trade. Am I looking for something that doesn't actually exist? If so, then why does it not exist?

• I suspect it could have something to do with the non-centralized nature of the forex markets. But then, what exactly does this historical data, e.g. DukasCopy's, mean? Do these numbers only represent the quotes (tick data) and trades (1-second data) handled by DukasCopy (who I understand to be a broker)? Or do they indeed come from some centralized aggregation of quotes and/or trades?

I have looked for explanations regarding the data on each of the four websites, but to no avail. I apologize for asking such a basic question; I'm very new to forex, and I'm surprised by how different it is from the NYSE/NASDAQ markets, which I'm slightly more familiar with.

### /r/emacs

#### lispy.el: "xi" = cond-to-ifs, "xc" = ifs-to-cond. Refactor while preserving whitespace.

### Lobsters

#### What Every C Programmer Should Know About Undefined Behavior

### StackOverflow

#### Implementing Sequence-Inference in Clojure using Method of Differences

I read that in Haskell, you could create a sequence like this: [1,3..9]. I wrote a version in Clojure, and though I liked programming without state, the time complexity is huge. Can I speed up my code without having to maintain state? How?

Edit: If you're interested in understanding the solution, you can read my blog post.

Use cases:

    (infer-n [1 2] 10)    => [1 2 3 4 5 6 7 8 9 10]
    (infer-n [1 4 9] 10)  => [1 4 9 16 25 ... 100]
    (infer-range [9 7] 1) => [9 7 5 3 1]

Code:

    (defn diffs
      "(diffs [1 2 5 12 29]) => (1 3 7 17)"
      [alist]
      (map - (rest alist) alist))

    (defn const-diff
      "Returns the diff if it is constant for the seq, else nil. Non-strict version."
      [alist]
      (let [ds (diffs alist)
            curr (first ds)]
        (if (some #(not (= curr %)) ds)
          nil
          curr)))

    (defn get-next
      "Returns the next item in the list according to the method of differences.
      (get-next [2 4]) => 6"
      [alist]
      (+ (last alist)
         (let [d (const-diff alist)]
           (if (= nil d)
             (get-next (diffs alist))
             d))))

    (defn states-of
      "Returns an infinite sequence of states that the input sequence can have.
      (states-of [1 3]) => ([1 3] [1 3 5] [1 3 5 7] [1 3 5 7 9]...)"
      [first-state]
      (iterate #(conj % (get-next %)) first-state))

    (defn infer-n
      "Returns the first n items from the inferred-list.
      (infer-n [1 4 9] 10) => [1 4 9 16 25 36 49 64 81 100]"
      [alist n]
      (take n (map first (states-of alist))))

    (defn infer-range
      "(infer-range [10 9] 1) => [10 9 8 7 6 5 4 3 2 1]"
      [alist bound]
      (let [in-range (if (>= bound (last alist))
                       #(<= % bound)
                       #(>= % bound))]
        (last (for [l (states-of alist)
                    :while (in-range (last l))]
                l))))

### /r/netsec

#### skiddie_trapper.js

It's a JavaScript project to set up a trap for script kiddies.

### StackOverflow

#### How can I scala-ify this block:

Scala newbie here... How can I scalify this block:

    if (sess != null) {
      sess.any = params.get("any").getOrElse("")
      sess.name = params.get("name").getOrElse("")
      sess.entity = params.get("entity").getOrElse("")
      sess.tin = params.get("tin").getOrElse("")
      sess.tintype = params.get("tintype").getOrElse("")
      sess.bdate = params.get("bdate").getOrElse("")
      sess.addr = params.get("addr").getOrElse("")
      sess.city = params.get("city").getOrElse("")
      sess.state = params.get("state").getOrElse("")
      sess.zip = params.get("zip").getOrElse("")
    }

sess is just an instance of a case class.

### TheoryOverflow

#### Complexity of factorial exponent over composite moduli

I know that computing the factorial modulo a composite number has no known fast algorithm, and showing a non-polylogarithmic lower bound in the BSS model for the factorial would separate P from NP in that model. Given $a \in \Bbb Z/n\Bbb Z$, where $n$ is composite, what is the complexity of calculating $a^{m!}$ in $\Bbb Z/n\Bbb Z$ for any given integer $n > m > 0$?

### StackOverflow

#### Printing GET and POST value tables in SCALA template in PlayFramework

Please help me.
Can you tell me how to print values from a form POST and GET in a Scala template in the Play Framework?

#### Scala/Play: javax.xml.soap request header Content-Type issue

I've got this simple call to a SOAP API in my Scala/Play application:

    import javax.xml.soap._

    object API {

      def call = {
        val soapConnectionFactory = SOAPConnectionFactory.newInstance
        val soapConnection = soapConnectionFactory.createConnection
        val url = "http://123.123.123.123"
        val soapResponse = soapConnection.call(createSOAPRequest, url)
        soapConnection.close
      }

      def createSOAPRequest = {
        val messageFactory = MessageFactory.newInstance
        val soapMessage = messageFactory.createMessage
        val soapPart = soapMessage.getSOAPPart
        val serverURI = "http://some.thing.xsd/"
        val envelope = soapPart.getEnvelope
        envelope.addNamespaceDeclaration("ecl", serverURI)
        val soapBody = envelope.getBody
        val soapBodyElem = soapBody.addChildElement("TestRequest", "ecl")
        soapBodyElem.addChildElement("MessageID", "ecl").addTextNode("Valid Pricing Test")
        soapBodyElem.addChildElement("MessageDateTime", "ecl").addTextNode("2012-04-13T10:50:55")
        soapBodyElem.addChildElement("BusinessUnit", "ecl").addTextNode("CP")
        soapBodyElem.addChildElement("AccountNumber", "ecl").addTextNode("91327067")
        val headers = soapMessage.getMimeHeaders
        headers.setHeader("Content-Type", "application/soap+xml; charset=utf-8")
        headers.addHeader("SOAPAction", serverURI + "TestRequest")
        headers.addHeader("Authorization", "Basic wfewefwefwefrgergregerg")
        println(headers.getHeader("Content-Type").toList)
        soapMessage.saveChanges
        soapMessage
      }
    }

The println outputs the right Content-Type header that I've set:

    List(application/soap+xml; charset=utf-8)

But the remote SOAP API that I'm calling responds with 415:

    Bad Response; Cannot process the message because the content type 'text/xml; charset=utf-8' was not the expected type 'application/soap+xml; charset=utf-8'.
I've checked the request being sent with Wireshark and indeed, the Content-Type header is wrong:

    Content-Type: text/xml; charset=utf-8

Why is the content type I set being ignored in this case, and what do I do to fix it?

UPDATE: I think I'm on to something here:

A SOAPPart object is a MIME part and has the MIME headers Content-Id, Content-Location, and Content-Type. Because the value of Content-Type must be "text/xml", a SOAPPart object automatically has a MIME header of Content-Type with its value set to "text/xml". The value must be "text/xml" because content in the SOAP part of a message must be in XML format. Content that is not of type "text/xml" must be in an AttachmentPart object rather than in the SOAPPart object. (source)

Just need to figure out how to change my code to match this.

UPDATE 2: SOLVED. Just needed to change one row to indicate that this is SOAP 1.2:

    val messageFactory = MessageFactory.newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)

### /r/dependent_types

#### Introduction to homotopy type theory (French PDF)

#### Type Theory and Constructive Mathematics (PDF, slides)

#### A small remark on two different formulations of the elimination rule for the identity type (PDF)

### Dave Winer

#### Newsweek's breakthrough

Yesterday started with a breakthrough -- Newsweek, the beleaguered, mostly forgotten news pub, had done a bit of investigative journalism, and had found the great Satoshi Nakamoto. The virtuoso technologist, sociologist and economist who came up with BitCoin. Strike one for journalism! They still have the right stuff.

By the end of the day -- the breakthrough was a black eye. The good news was now bad news for Newsweek and for journalism. This, in all likelihood, was not the man. Not only was Newsweek wrong, they were spectacularly wrong, wrong on the cover, using up one of their last bits of credibility. From now on will Newsweek be anything but a punchline?
But what does this say about investigative journalism in the future, when the rest of us can quickly evaluate the cover of Newsweek and find it lacking? And how many wrong stories of the past stood, because there was no Internet to expose them?

In my own field, I can tell you that most of the stories Newsweek ran about tech were based on huge omissions. Their typical tech story was about a titan fighting to control the future, when the truth is they were fighting a losing battle against obsolescence. But the MSM never reports this, because they have too much regard for money and many of them want jobs working for the moguls they write about.

We still hear it, in the last gasps of 20th century journalism -- in the stories we don't read about moguls who are starting news companies (the reporters don't write them). They still worship money, when all money can do is buy you big houses, cars and spouses, lots of them, sports teams, and offices full of journalists.

### UnixOverflow

#### Trap: can't su as root, can't change group to wheel, ssh as root prohibited

Is this a trap? I made these steps in FreeBSD 10:

1) ssh as root prohibited
2) logged in as user
3) su to root
4) as root, chsh changed the name of user "user" to "luser"
5) exit from root

And from this moment I can't su to root, because luser is not in the wheel group, and I can't change the group in /etc/group because I have no privileges for doing that. What can I do to log in as root?

### /r/netsec

#### hashID.py: awesome hash identifier

### Lobsters

#### zip Code: Unpacking Data Compression

@silentbicycle's presentation from Strange Loop.

### CompsciOverflow

#### Planar graphs, running time problem [on hold]

The maximum number of edges a planar graph can have is 3|V| - 6, so if I use an adjacency list I can run the algorithm in O(|V| + |E|) time. However, if I use the adjacency matrix I get O(|V|^2/4) time. Can anyone please give me a proof for this? I understand the adjacency list part. Thank you.

### StackOverflow

#### Class#getInterfaces() and Class#getGenericInterfaces() return arrays of differing length

If you take the class scala.runtime.AbstractPartialFunction from Scala 2.10.2 (I did not check other versions) and compare the output of AbstractPartialFunction.class.getInterfaces() and AbstractPartialFunction.class.getGenericInterfaces(), you may notice that the results do not match. The generic interfaces are scala.Function1<T1, R> and scala.PartialFunction<T1, R>, while getInterfaces() only returns scala.PartialFunction. From the Scala documentation I can see that the generic information is right, because PartialFunction is a Function1.

The javadoc for getInterfaces says:

If this object represents a class, the return value is an array containing objects representing all interfaces implemented by the class. The order of the interface objects in the array corresponds to the order of the interface names in the implements clause of the declaration of the class represented by this object.

and getGenericInterfaces has the exact same text. From that (and from other texts, including stackoverflow information) I would conclude that the order and length of the arrays are equal. Only this is not the case here. Why? I was able to reproduce this with several java7 and java8 versions so far; I didn't try java6 or even java5.
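For an ordinary javac-compiled class the two arrays do line up; the mismatch above appears to come from scalac emitting a generic Signature attribute that lists more interfaces than the class's implements clause (which is what getInterfaces reads). A quick check of the normal case — Box, Named, and StringBox are invented for the illustration:

```java
public class InterfaceArrays {
    interface Box<T> {}
    interface Named {}
    static class StringBox implements Box<String>, Named {}

    public static void main(String[] args) {
        // For a javac-compiled class the raw and generic interface
        // arrays agree in length and order, as the javadoc implies.
        System.out.println(StringBox.class.getInterfaces().length);
        System.out.println(StringBox.class.getGenericInterfaces().length);
    }
}
```

getInterfaces reads the constant-pool interfaces list of the class file, while getGenericInterfaces prefers the Signature attribute when one is present, so a compiler that writes inconsistent copies of that information can make them diverge.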
EDIT: The javap output for AbstractPartialFunction (only the header, of course) is:

    Compiled from "AbstractPartialFunction.scala"
    public abstract class scala.runtime.AbstractPartialFunction<T1, R> implements scala.Function1<T1, R>, scala.PartialFunction<T1, R>

Using the asm lib and the Textifier there, I can see this header information:

    // class version 50.0 (50)
    // access flags 0x421
    // signature <T1:Ljava/lang/Object;R:Ljava/lang/Object;>Ljava/lang/Object;Lscala/Function1<TT1;TR;>;Lscala/PartialFunction<TT1;TR;>;
    // declaration: scala/runtime/AbstractPartialFunction<T1, R> implements scala.Function1<T1, R>, scala.PartialFunction<T1, R>
    public abstract class scala/runtime/AbstractPartialFunction implements scala/PartialFunction

plus a ScalaSigAttribute. Of course I did not show the methods in both cases.

#### what have i done wrong trying to call the model with form value

This:

    def signup = Action { implicit request =>
      signupForm.bindFromRequest.fold(
        formWithErrors => BadRequest(html.login(loginForm, formWithErrors)),
        signer =>
          Signup.insert(signer)
          Redirect(routes.Application.login)
      )
    }

gives me this error:

    value Redirect is not a member of Int
    possible cause: maybe a semicolon is missing before `value Redirect'?

If I comment out Signup.insert(signer) it's fine, but I want it to call that... but when I use this it's fine:

    def save = IsAuthenticated { username => implicit request =>
      User.findByEmail(username).map { user =>
        personForm.bindFromRequest.fold(
          formWithErrors => BadRequest(html.person_views.createForm(formWithErrors, user)),
          person => {
            Person.insert(person)
            Redirect(routes.Persons.list()).flashing("success" -> "success")
          }
        )
        //}.getOrElse(Forbidden)
      }.getOrElse(Forbidden)
    }

### Dave Winer

#### Fargo as v2.0 of the Metaweblog API

I read an interesting piece by Brent Simmons yesterday about his passion for creating blogging systems. He's thinking about doing a quick one, running in node.js.
As I read the piece I recognized a lot of the ideas as ones that we've already implemented in Fargo Publisher, which is an open source, MIT-licensed node.js app.

1. My first thought was wouldn't it be great if Brent used Fargo Publisher as a starting point? That way we might get some new features, maybe some bug fixes, and more users. Having two pieces of software resting on top of an API gives it a much better chance to gain traction.

2. My second thought was that he was planning on using the Metaweblog API, something I designed in 2002 to build on the Blogger API. But we've gone so much further in the intervening 12 years. Metaweblog views each post as a document, but Fargo views a whole website as a document. Suppose you had a computer that could only deal with one file at a time, and one came along that could have as many open windows as you want? Wouldn't you want to at least try it?

In that way Fargo 2 is the next step after Metaweblog. Yes, it's too bad progress happens so slowly in tech. 12 years for such a simple evolution. But at least it happened. It would be a shame if we had to wait another 12 years for adoption by users, esp very advanced ones like my longtime buddy, Brent.

### StackOverflow

#### Scala case class arguments instantiation from array

Consider a case class with a possibly large number of members; to illustrate the case, assume two arguments, as in

    case class C(s1: String, s2: String)

and therefore assume an array with at least that many elements,

    val a = Array("a1", "a2")

Then

    scala> C(a(0), a(1))
    res9: C = C(a1,a2)

However, is there an approach to case class instantiation where there is no need to refer to each element in the array, for any (possibly large) number of predefined class members?

### /r/netsec

#### awesome unknown hash identifier

### UnixOverflow

#### pkg2ng throwing tons of errors about unknown keywords

So... the pkg_* tools are deprecated, with EOL scheduled for September 2014. Time to convert to pkgng.
They have provided the pkg2ng tool for that. But when I run it, it throws tons of error messages. I don't know if I can ignore them or if that will introduce subtle errors.

    # pkg2ng
    Converting packages from /var/db/pkg
    Converting libsigsegv-2.10...
    pkg: fopen(/usr/ports/Keywords/display.yaml): No such file or directory
    pkg: unknown keyword display, ignoring @display
    Installing libsigsegv-2.10... done
    Converting m4-1.4.17,1...
    pkg: fopen(/usr/ports/Keywords/mtree.yaml): No such file or directory
    pkg: unknown keyword mtree, ignoring @mtree
    Installing m4-1.4.17,1... done
    Converting libiconv-1.14_2...
    pkg: fopen(/usr/ports/Keywords/mtree.yaml): No such file or directory
    pkg: unknown keyword mtree, ignoring @mtree
    Installing libiconv-1.14_2... done
    Converting tdb-1.2.12,1...
    pkg: fopen(/usr/ports/Keywords/conflicts.yaml): No such file or directory
    pkg: unknown keyword conflicts, ignoring @conflicts
    Installing tdb-1.2.12,1... done
    (and so on)

Google doesn't give me much, only numerous repetitions of the thread that this post stems from: http://lists.freebsd.org/pipermail/freebsd-pkg/2013-June/000052.html

I find that pretty weird; it looks as if there were only 2 or 3 people on the planet having this problem, one of whom would be me. So...

• Anyone had this problem, too?
• Can the error messages be ignored? (But why are they printed, then? Remember, this is the package database, which is pretty central to the system.)
• What can I do to rectify the situation?

### StackOverflow

#### Idiomatic way to proxy named parameters in Clojure

I need a function that thinly wraps amazonica's sqs/receive-message in order to add a default wait time. The function requires a queue URL, and then accepts any number of optional named parameters, which should be passed along to sqs/receive-message untouched.
I would like to call it like this:

    (my-receive-message "https://sqs.us-east-1.amazonaws.com/123/test-q"
                        :max-number-of-messages 10
                        :delete true)

This should result in a call to sqs/receive-message like this:

    (sqs/receive-message :queue-url "https://sqs.us-east-1.amazonaws.com/123/test-q"
                         :wait-time-seconds 20
                         :max-number-of-messages 10
                         :delete true)

This is something I find myself wanting to do fairly often, but I haven't found a nice way yet. Is there an idiomatic way to do this?

#### IMain.valueOfTerm only gets last value

    import scala.tools.nsc._
    import scala.tools.nsc.interpreter._

    val settings = new Settings
    val n = new IMain(settings)
    n.interpret("""
      val y = 5
      val x = 10
    """)
    println(n.valueOfTerm("y").get)
    n.close()

I would expect that println would print 5, the value of y. Instead it prints 10, the value of x. Now if I interpret this:

    n.interpret("""
      val y = 5
      val x = 10
      y
    """)

it prints 5, the value of y. Therefore I assume valueOfTerm returns only the last mentioned value. Isn't it intended to return the requested value? Can anyone reproduce this? Or is something wrong with my code? I used Scala 2.10.3 for this setup. (Scala Doc: IMain)

#### How to serialize a Scala BKTree?

I am not sure whether I should ask this question here. I have a Scala class that extends serializable, but when I try saving the class using a FileOutputStream, I keep getting a NotSerializableException.

    class BKTree(terms: Seq[String], dist: (String, String) => Int = Levenshtein.distance)
        extends scala.serializable {
    }

    class BKNode(val name: String, dist: (String, String) => Int = Levenshtein.distance)
        extends scala.serializable {
    }

    object Levenshtein extends scala.serializable {
    }

The BKTree generated works perfectly fine, but on trying to save the tree it generates a NotSerializableException.
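A plain-Java illustration of the usual cause of this exception (hedged, since the Scala specifics above may differ): serialization walks the entire object graph, so a single field holding a non-Serializable object — such as a distance-function strategy — fails the whole write. Marking the field transient, or making the strategy itself Serializable, is the standard fix. The Distance interface and Node class here are invented for the example:

```java
import java.io.*;

public class SerializeDemo {
    // A strategy object that does NOT implement Serializable.
    interface Distance { int apply(String a, String b); }

    static class Node implements Serializable {
        final String name;
        // transient: excluded from serialization, so the
        // non-serializable Distance field no longer breaks the write.
        transient Distance dist;
        Node(String name, Distance dist) { this.name = name; this.dist = dist; }
    }

    public static void main(String[] args) throws Exception {
        Node node = new Node("book", (a, b) -> Math.abs(a.length() - b.length()));
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(node); // succeeds only because dist is transient
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Node copy = (Node) in.readObject();
            // The transient field comes back as null and must be re-supplied.
            System.out.println(copy.name + " " + (copy.dist == null));
        }
    }
}
```

Without the transient keyword, writeObject would throw NotSerializableException when it reached the lambda-backed Distance field.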
### Lobsters #### Why There Will Never Be Another RedHat: The Economics Of Open Source ### Planet Theory #### David Woodruff receives the Presburger Award 2014 The EATCS is proud to announce that the Presburger Award 2014 Committee has chosen David Woodruff (IBM Almaden Research Center) as the recipient of the Presburger Award 2014. Congratulations to David! Since 2010, the Presburger Award has been given each year to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929. The Presburger Award 2014 is sponsored by CWI Amsterdam and will be presented at ICALP 2014 in Copenhagen, Denmark. David Woodruff, born in 1980, has made important contributions to the theory of data streams, both creating new algorithms, and demonstrating that certain algorithms cannot exist. His work has an impact on other fields, including compressed sensing, machine learning, and numerical linear algebra. In the area of data streams, he resolved the Distinct Elements Problem, simultaneously optimizing the amount of memory used, the time needed to process each new entity, and the time needed to report an estimate of the number of distinct elements in the stream. In the area of machine learning, he used his previous results on data streams to design sub-linear algorithms for linear classification and minimum enclosing ball problems. In numerical linear algebra, he developed the first algorithms for low rank approximation and regression that run in time proportional to the number of non-zero entries of the input matrix. His work also resulted in 17 patents related to data streams and their applications. 
The 2014 Presburger Award Committee consisted of:

    Antonin Kucera (Brno), chair
    Claire Mathieu (ENS Paris)
    Peter Widmayer (Zurich)

### QuantOverflow

#### The Public Market Equivalent measure in private equity

What are the advantages and disadvantages of the Public Market Equivalent measure in private equity? Why is it that the volatility of the cash flows does not matter? This topic has been discussed in a paper by Sorensen and Jagannathan (2013), but I can't seem to understand the logic behind it. Thanks!

### StackOverflow

#### Custom Scala REPL Issues

I'm trying to write a basic Scala REPL using some information I found on various sites. My most basic REPL implementation looks like this:

    import scala.tools.nsc.Settings
    import scala.tools.nsc.interpreter._

    object BillyREPL extends App {
      val settings = new Settings
      settings.usejavacp.value = true
      settings.deprecation.value = true
      new ILoop().process(settings)
    }

With the following build settings:

    import sbt._
    import sbt.Keys._

    object BillyREPLBuild extends Build {
      lazy val billyrepl = Project(
        id = "billyrepl",
        base = file("."),
        settings = Project.defaultSettings ++ Seq(
          name := "BillyREPL",
          organization := "tv.yobriefcasts",
          version := "0.1-SNAPSHOT",
          scalaVersion := "2.10.1",
          libraryDependencies ++= Seq(
            "org.scala-lang" % "scala-compiler" % "2.10.1",
            "org.scala-lang" % "scala-library" % "2.10.1"
          )
        )
      )
    }

Attempting to run this, however, leads to some warnings and an eventual error (which I presume is caused by the initial warning):

    Welcome to Scala version 2.10.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_12-ea).
    Type in expressions to have them evaluated.
    Type :help for more information.

    scala> Failed to initialize compiler: object scala.annotation.Annotation in compiler mirror not found.
    ** Note that as of 2.8 scala does not assume use of the java classpath.
    ** For the old behavior pass -usejavacp to scala, or if using a Settings
    ** object programatically, settings.usejavacp.value = true.
And the error when trying to read/evaluate ANYTHING is below. I'm not sure if this is due to some extra missing dependency, and I do realise that what the error message says suggests this is not common, but I wondered, before I open an issue, if anyone has dealt with this before?

    Failed to initialize the REPL due to an unexpected error.
    This is a bug, please, report it along with the error diagnostics printed below.
    java.lang.NullPointerException
        at scala.tools.nsc.interpreter.ExprTyper$codeParser$.applyRule(ExprTyper.scala:24)
        at scala.tools.nsc.interpreter.ExprTyper$codeParser$.stmts(ExprTyper.scala:35)
        at scala.tools.nsc.interpreter.ExprTyper$$anonfun$parse$2.apply(ExprTyper.scala:43)
        at scala.tools.nsc.interpreter.ExprTyper$$anonfun$parse$2.apply(ExprTyper.scala:42)
        at scala.tools.nsc.reporters.Reporter.withIncompleteHandler(Reporter.scala:51)
        at scala.tools.nsc.interpreter.ExprTyper$class.parse(ExprTyper.scala:42)
        at scala.tools.nsc.interpreter.IMain$exprTyper$.parse(IMain.scala:1074)
        at scala.tools.nsc.interpreter.IMain.parse(IMain.scala:1078)
        at scala.tools.nsc.interpreter.IMain$$anonfun$showCodeIfDebugging$1.apply(IMain.scala:1168)
        at scala.tools.nsc.interpreter.IMain$$anonfun$showCodeIfDebugging$1.apply(IMain.scala:1168)
        at scala.tools.nsc.interpreter.IMain.beSilentDuring(IMain.scala:238)
        at scala.tools.nsc.interpreter.IMain.showCodeIfDebugging(IMain.scala:1168)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compileAndSaveRun(IMain.scala:800)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compile(IMain.scala:761)
        at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:618)
        at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:661)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(IMain.scala:660)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(IMain.scala:660)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:232)
        at scala.tools.nsc.interpreter.IMain.quietBind(IMain.scala:660)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply$mcV$sp(ILoop.scala:838)
        at scala.tools.nsc.interpreter.ILoopInit$class.runThunks(ILoopInit.scala:122)
        at scala.tools.nsc.interpreter.ILoop.runThunks(ILoop.scala:42)
        at scala.tools.nsc.interpreter.ILoopInit$class.postInitialization(ILoopInit.scala:95)
        at scala.tools.nsc.interpreter.ILoop.postInitialization(ILoop.scala:42)
        at scala.tools.nsc.interpreter.ILoopInit$$anonfun$createAsyncListener$1.apply$mcV$sp(ILoopInit.scala:63)
        at scala.tools.nsc.interpreter.ILoopInit$$anonfun$createAsyncListener$1.apply(ILoopInit.scala:60)
        at scala.tools.nsc.interpreter.ILoopInit$$anonfun$createAsyncListener$1.apply(ILoopInit.scala:60)
        at scala.tools.nsc.io.package$$anon$3.call(package.scala:40)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        ...


#### From the trenches: espie@ reports on recent experiments in package building

In a recent post to the ports mailing list titled "dpb fun", Marc Espie (espie@) reported on tests running the OpenBSD distributed ports builder on larger than usual hardware and improvements that sprang from the test:

So, I got access to a bunch of fast machines through Yandex. Big kudos to them. It allowed me to continue working on dpb optimizations for fast clusters, after a tantalizing glimpse into big clusters I got a few months ago thanks to an experiment led by Florian Obser.

The rest of the post follows after the fold; it looks like exciting times are ahead.

### Planet Emacsen

#### Irreal: Fixing the Emacs distnoted Problem on OS X 10.9

With Emacs 24.3 (and possibly earlier versions) under OS X 10.9, there is a nasty problem that causes distnoted, the OS X distributed notifications daemon, to periodically suck up processor resources and all but tie up the machine. Sometimes it recovers on its own; sometimes you have to restart Emacs. The problem is particularly apt to occur after waking up from sleep mode.

The problem is fixed in the 24.4 release and I’ve been ignoring it while I waited for the new release. The other day, though, I ran out of patience and hunted up a patch I’d seen for it some time ago. If you build Emacs from source, it’s trivial to apply it: just follow the instructions in the patch commentary1.

After applying the patch and rebuilding, everything worked normally again and I haven’t had any more runaway distnoted problems. In fact, the whole system seems snappier since I installed the patch, which isn’t too surprising given that Emacs is always running on my machines. If you’re running Emacs on OS X 10.9, you may want to rebuild it with the patch; as long as you have a C development environment, that’s easy. I don’t know whether Homebrew and the other package systems have applied the patch or not.

## Footnotes:

1

For some reason that I’ve long forgotten, I don’t have to do the

    make bootstrap

step. If you get a fatal error on the

    make install

step, just start over and omit the

    make bootstrap

step.
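Putting the footnote together with the usual Emacs build steps, the whole patch-and-rebuild sequence looks roughly like the following sketch. The patch filename and source directory here are hypothetical; follow the instructions in the patch commentary for the real ones.

```shell
# Hypothetical names: adjust the directory and patch file to your setup.
cd emacs-24.3
patch -p1 < distnoted.patch   # apply the distnoted fix
./configure
make bootstrap                # skippable on some setups, per the footnote
make install                  # on a fatal error here, redo without bootstrap
```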

### StackOverflow

#### Why are functional languages amenable to parallelization?

There is another thread titled "Programming language for functional parallelism: F# vs Haskell" in which the OP stated "Functional programming has immutable data structures and no side effect which are inherently suitable for parallel programming."

Jon Harrop, in his answer, argued that "Parallelism is solely about performance and purity degrades performance. So purely functional programming is not a good starting point if your objective is to get decent performance."

Well, I am not planning to go into whether functional programming actually improves performance; that seems to be an implementation issue. What interests me is the conceptual level:

Are "immutable data" and "freedom from side effects" BOTH required for easy parallelization? Are they sufficient conditions, or merely necessary ones? Are they more than necessary to guarantee data independence or commutativity? References to academic literature are appreciated.
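As a conceptual illustration of the claim under discussion (this sketch is mine, not from the thread, and the function names are made up): when a function is pure and its inputs are immutable, no call can observe or disturb another call's state, so the work can be distributed across workers in any order without locks, and the parallel result must equal the sequential one.

```python
from concurrent.futures import ThreadPoolExecutor

def collatz_steps(n: int) -> int:
    """Pure function: the result depends only on n; no shared state is touched."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

inputs = tuple(range(1, 1001))  # immutable input data

# Purity makes the parallel schedule unobservable in the result,
# so the two runs agree exactly.
sequential = [collatz_steps(n) for n in inputs]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(collatz_steps, inputs))

assert parallel == sequential
```

Note that this shows only *safety* of parallelization, not a speedup; whether purity helps or hurts raw performance is exactly the implementation question set aside above.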