Planet Primates

December 19, 2014

StackOverflow

Bracket Access in Scala.js

I'm trying to get the following code from scala-js/test/src/test/scala/scala/scalajs/test/jsinterop/DictionaryTest.scala to run in the browser in a Scala.js project.

import scala.scalajs.js

val obj = js.eval(
  "var dictionaryTest13 = { a: 'Scala.js', b: 7357 }; dictionaryTest13;")
val dict = obj.asInstanceOf[js.Dictionary[js.Any]]
var propCount = 0
var propString = ""

for (prop <- js.Dictionary.propertiesOf(dict)) {
  propCount += 1
  propString += dict(prop)
}
// g.console.log(...)

It gives me: java.lang.RuntimeException: stub

How can I get this to work and make use of bracket access, e.g. to run through a JSON object passed from JS to Scala.js, in analogy to the JS pattern for(i in obj) { obj[i] }?

That serves the trivial purpose of iterating over a JSON data structure in a way that is not bound to specific attributes.
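
A hedged sketch of the plain bracket-access pattern (not taken from the test suite; js.Object.keys mirrors JavaScript's Object.keys in the Scala.js standard facades):

import scala.scalajs.js

// Iterate a JS object like for(i in obj) { obj[i] }: keys come from
// Object.keys, values via js.Dictionary's apply, i.e. dict(key) ~ obj[key].
def dumpProps(obj: js.Object): Unit = {
  val dict = obj.asInstanceOf[js.Dictionary[js.Any]]
  for (key <- js.Object.keys(obj))
    println(s"$key -> ${dict(key)}")
}

Note this only works in code compiled and run by Scala.js; running the snippet on the JVM (for example from a plain Scala test) is the typical source of the java.lang.RuntimeException: stub error.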

by user3464741 at December 19, 2014 09:02 AM

CompsciOverflow

Why is Relativization a barrier?

When I was explaining to a friend the Baker-Gill-Solovay proof that there exists an oracle with which we can have $\mathsf{P} = \mathsf{NP}$, and an oracle with which we can have $\mathsf{P} \neq \mathsf{NP}$, a question came up as to why such techniques are ill-suited for proving $\mathsf{P} \neq \mathsf{NP}$, and I couldn't give a satisfactory answer.

To put it more concretely, if I have an approach to prove $\mathsf{P} \neq \mathsf{NP}$ and if I could construct oracles to make a situation like above happen, why does it make my method invalid?

Any exposition/thoughts on this topic?

by Nikhil at December 19, 2014 08:54 AM

StackOverflow

Debugging a standalone Jetty server - how to specify single-threaded mode?

I have successfully created a standalone Scalatra / Jetty server, using the official instructions from Scalatra ( http://www.scalatra.org/2.3/guides/deployment/standalone.html )

I am debugging it under Ensime, and would like to limit the number of threads handling messages to a single one - so that single-stepping through the servlet methods will be easier.

I used this code to achieve it:

package ...

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.servlet.{DefaultServlet, ServletContextHandler}
import org.eclipse.jetty.webapp.WebAppContext
import org.scalatra.servlet.ScalatraListener
import org.eclipse.jetty.util.thread.QueuedThreadPool
import org.eclipse.jetty.server.ServerConnector

object JettyLauncher {
  def main(args: Array[String]) {
    val port = 
      if (System.getenv("PORT") != null) 
        System.getenv("PORT").toInt 
      else
        4080

    // DEBUGGING MODE BEGINS
    val threadPool = new QueuedThreadPool()
    threadPool.setMaxThreads(8)
    val server = new Server(threadPool)

    val connector = new ServerConnector(server)
    connector.setPort(port)
    server.setConnectors(Array(connector))
    // DEBUGGING MODE ENDS

    val context = new WebAppContext()
    context setContextPath "/"
    context.setResourceBase("src/main/webapp")
    context.addEventListener(new ScalatraListener)
    context.addServlet(classOf[DefaultServlet], "/")

    server.setHandler(context)

    server.start
    server.join
  }
}

It works fine - except for one minor detail...

I can't tell Jetty to use 1 thread - the minimum value is 8!

If I do, this is what happens:

$ sbt assembly
...
$ java -jar ./target/scala-2.11/CurrentVersions-assembly-0.1.0-SNAPSHOT.jar 
18:13:27.059 [main] INFO  org.eclipse.jetty.util.log - Logging initialized @41ms
18:13:27.206 [main] INFO  org.eclipse.jetty.server.Server - jetty-9.1.z-SNAPSHOT
18:13:27.220 [main] WARN  o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server@1ac539f: java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8

...which is why you see setMaxThreads(8) instead of setMaxThreads(1) in my code above.

Any ideas why this happens?
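
A hedged guess at a workaround (the three-argument ServerConnector constructor is in the Jetty 9 API; the thread arithmetic is an assumption): the "needed" count is driven by the connector's acceptor and selector threads plus pool overhead, so shrinking those lets the pool shrink too.

// Sketch: one acceptor and one selector lower Jetty's minimum thread
// requirement, which makes single-stepping through requests easier.
val threadPool = new QueuedThreadPool()
threadPool.setMaxThreads(4)
val server = new Server(threadPool)
val connector = new ServerConnector(server, 1, 1) // (server, acceptors, selectors)
connector.setPort(port)
server.setConnectors(Array(connector))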

by ttsiodras at December 19, 2014 08:44 AM

required: net.liftweb.util.CanBind[java.lang.String]

I am migrating this code to version 2.5:

https://github.com/dpp/simply_lift/blob/master/samples/shopwithme/

When compiling the project, I get an error like the following:

[error] /lift/shopwithme/src/main/scala/code/comet/CometCart.scala:40: type mismatch;
[error]  found   : scala.xml.NodeSeq
[error]  required: net.liftweb.util.CanBind[java.lang.String]
[error]     val theTR = ("tr ^^" #> "**")(ns)
[error]                                   ^

https://github.com/dpp/simply_lift/blob/master/samples/shopwithme/src/main/scala/code/comet/CometCart.scala

Can someone help me with this, please?

by user1034380 at December 19, 2014 08:44 AM

How to convert korma select results to json for a rest service (compojure)?

I am using Compojure, Cheshire, and Korma (with a PostgreSQL DB) to create a REST service. I've created a table with two string fields (name and description) with the following structure:

(defentity posts
  (pk :id)
  (table :posts)
  (entity-fields :name :description))

I can insert records into this table but when I try to exec

(defn get-all-posts [] 
  (select posts))

and return the results from the server

(defroutes app-routes
 (GET "/" [] (get-start))
 (context "/posts" []
   (GET "/" [] (get-all-posts))
 ...

I receive such an error: java.lang.IllegalArgumentException No implementation of method: :render of protocol: #'compojure.response/Renderable found for class: clojure.lang.PersistentVector

As far as I can see, I need to convert the posts collection to JSON. How do I do that?

by Curiosity at December 19, 2014 08:39 AM

Akka / Get current position of iterator in ByteString

I have an instance of ByteString in Akka. To read data from it, I should use its iterator() method.

I read some data and then decide that I need to create a view (a separate iterator over some chunk of the data).

I can't use slice() on the original iterator, because that would make it unusable; the docs say:

After calling this method, one should discard the iterator it was called on, and use only the iterator that was returned. Using the old iterator is undefined, subject to change, and may result in changes to the new iterator as well.

So, it seems that I need to call slice() on ByteString. But slice() has from and until parameters and I don't know from. I need something like this:

val originalByteString: ByteString = ... // <-- This is my input data
val originalIterator = originalByteString.iterator
// ... read some data from originalIterator ...
val length = 100                              // <-- size of the view
val from = originalIterator.currentPosition() // <-- I need this
val until = from + length
val viewOfOriginalByteString = originalByteString.slice(from, until)
val iteratorForView = viewOfOriginalByteString.iterator // <-- This is my goal
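
A hedged workaround sketch (assuming ByteIterator.len returns the number of bytes remaining, as in the Akka API): derive the position as the total length minus what is left to read.

import akka.util.ByteString

val original = ByteString("some input data")
val it = original.iterator
it.getByte; it.getByte                       // read some data
val from  = original.length - it.len         // derived current position
val until = (from + 100) min original.length // view size, clamped
val view  = original.slice(from, until)
val iteratorForView = view.iterator          // independent of `it`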

by bobby at December 19, 2014 08:35 AM

QuantOverflow

optimal portfolio with different borrowing and lending rates

I have some risky assets and a risk-free asset, and I am trying to find the optimal portfolio under constraints like the following.

Suppose I wish to obtain an expected return of 12%, what portfolio will you choose? What is the standard deviation of the portfolio?

I know how to solve this and the reasoning behind it:

$w_p = (r_p - r_{rf}) / (r_{tp} - r_{rf})$

$w_p$ = weight of the portfolio

$r_p$ = return of the portfolio, which should be 12% as given above

$r_{rf}$ = risk-free return

$r_{tp}$ = tangency-point return

but I would like to know how the solution to these types of questions changes when we have different rates for borrowing and lending, and why.
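
A hedged sketch of the standard textbook answer (notation mine, not from the post): with lending rate $r_\ell$ and a higher borrowing rate $r_b > r_\ell$ there are two tangency portfolios, $T_\ell$ and $T_b$, and the efficient frontier kinks:

$$ E[r_p] = \begin{cases} r_\ell + \frac{E[r_{T_\ell}] - r_\ell}{\sigma_{T_\ell}}\,\sigma_p & \text{(lending segment, low risk)} \\ \text{the risky-asset frontier} & \text{(between } T_\ell \text{ and } T_b\text{)} \\ r_b + \frac{E[r_{T_b}] - r_b}{\sigma_{T_b}}\,\sigma_p & \text{(borrowing segment, high risk)} \end{cases} $$

So for a 12% target the single-tangency formula above still applies, but with $r_{rf}$ and the tangency portfolio replaced by $r_\ell$ and $T_\ell$ on the lending segment, or by $r_b$ and $T_b$ on the borrowing segment, depending on where 12% falls.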

by ahmed at December 19, 2014 08:17 AM

StackOverflow

Scala: what is the best way to append an element to an Array?

Say I have an Array[Int] like

val array = Array( 1, 2, 3 )

Now I would like to append an element to the array, say the value 4, as in the following example:

val array2 = array + 4     // will not compile

I can of course use System.arraycopy() and do this on my own, but there must be a Scala library function for this, which I simply could not find. Thanks for any pointers!

Notes:

  1. I am aware that I can append another Array of elements, like in the following line, but that seems too round-about:

    val array2b = array ++ Array( 4 )     // this works
    
  2. I am aware of the advantages and drawbacks of List vs Array and here I am for various reasons specifically interested in extending an Array.

Edit 1

Thanks for the answers pointing to the :+ operator method. This is what I was looking for. Unfortunately, it is rather slower than a custom append() method implementation using arraycopy -- about two to three times slower. Looking at the implementation in SeqLike[], a builder is created, then the array is added to it, then the append is done via the builder, then the builder is rendered. Not a good implementation for arrays. I did a quick benchmark comparing the two methods, looking at the fastest time out of ten cycles. Doing 10 million repetitions of a single-item append to an 8-element array instance of some class Foo takes 3.1 sec with :+ and 1.7 sec with a simple append() method that uses System.arraycopy(); doing 10 million single-item append repetitions on 8-element arrays of Long takes 2.1 sec with :+ and 0.78 sec with the simple append() method. Wonder if this couldn't be fixed in the library with a custom implementation for Array?
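
For reference, a minimal sketch of the kind of arraycopy-based append described above (names illustrative, not the library method):

import scala.reflect.ClassTag

// Allocate once, bulk-copy, write the last slot.
def append[T: ClassTag](xs: Array[T], x: T): Array[T] = {
  val ys = new Array[T](xs.length + 1)
  System.arraycopy(xs, 0, ys, 0, xs.length)
  ys(xs.length) = x
  ys
}

append(Array(1, 2, 3), 4) // Array(1, 2, 3, 4)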

Edit 2

For what it's worth, I filed a ticket: https://issues.scala-lang.org/browse/SI-5017

by Gregor Scheidt at December 19, 2014 08:07 AM

A simplest way to convert array to 2d array in scala

I have a 10 × 10 Array[Int]

val matrix = for {
    r <- 0 until 10
    c <- 0 until 10
} yield r + c  

and want to convert the "matrix" to an Array[Array[Int]] with 10 rows and 10 columns.

What is the simplest way to do it?
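
A hedged sketch of one simple answer (assuming the flat data is row-major): materialize an Array and use grouped, which cuts the 100 values into 10 rows of 10.

val flat: Array[Int] = (for {
  r <- 0 until 10
  c <- 0 until 10
} yield r + c).toArray

val matrix2d: Array[Array[Int]] = flat.grouped(10).toArray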

by bourneli at December 19, 2014 07:53 AM

CompsciOverflow

Are there any algorithms where the recovery of a witness changes the time complexity?

In many algorithms, such as the solution to the longest-subsequence problem using dynamic programming, finding the length of an answer (or signaling the nonexistence of an answer) is easy, but recovering the answer itself (in this case, a substring of the maximum possible length) requires some modifications to the algorithm.

Is there any algorithm where doing so necessitates an increase in the time-complexity of the algorithm?

Note that a change in the big-O complexity of the algorithm as written is enough (for example, a change from $O(n)$ to $O(n \ln n)$).

Nontrivial, here, means that the answer should refer to an algorithm which can be modified to return the answer (for example, by storing a table), not a problem for which there is a, say, $O(1)$ algorithm that says if an answer exists and a completely different $O(2^n)$ algorithm that can find the answer.

by octatoan at December 19, 2014 07:49 AM

Can anybody explain intuitively why quicksort needs log(n) extra space and mergesort needs n?

I've searched on the internet and everyone says it's the stack space needed for recursion. I know the log(n) extra space for quicksort arises when it is done in place, but I still don't get it. Can anybody explain intuitively why quicksort needs log(n) extra space and mergesort needs n? Thanks.

by Guest at December 19, 2014 07:48 AM

Finding an st-path in a planar graph which is adjacent to the fewest number of faces

I am curious whether the following problem has been studied before, but I wasn't able to find any papers about it:

Given a planar graph G, and two vertices s and t, find an st-path $P$ which minimizes the number of distinct faces of G which contain vertices of $P$ on their boundary.

Does anybody know any references?

by Joe at December 19, 2014 07:47 AM

How can I calculate tree sizes to "stretch up" a finger tree?

I've been working on implementing an efficient Cartesian product operation (actually the <*> operation, but it amounts to about the same thing) for sequences based on Hinze and Paterson's "Finger trees: a simple general-purpose data structure". It seems that a natural way to calculate $xs \times ys$ when $xs$ and $ys$ each has at least two elements is to (conceptually) replace each leaf $x$ in $xs$ with a 2-3 tree representing $\{(x,y)\mid y \in ys\}$, and then "stretch" the first and last element of the resulting tree upwards/outwards to bring the tree to the proper depth. But for efficiency and other reasons, this actually needs to be done in the opposite direction—I'd need to pull off the first and last element of $xs$, and then work my way down, calculating at each level how much of $ys$ to leave behind there, and also to check whether it's possible to represent $ys$ by a 2-3 tree of the appropriate size for that level. I'm not really sure how to get a grip on the necessary math.

by dfeuer at December 19, 2014 07:46 AM

Modified Bellman-Ford to find minimum cost cycle in O(E²V) time?

I'm thinking about how you can modify Bellman-Ford a bit to calculate the minimum weight cycle in an undirected graph with positive weights. Note that the constraint is that the algorithm must run in $O(VE^2)$ time, and Bellman-Ford runs in $O(VE)$ time, so we are looking at a modification that adds a factor of $O(E)$.

My approach: first make a directed graph by doubling each edge in both directions, then run Bellman-Ford to come up with the shortest paths, then compute the strongly connected components in this graph. After you have a tree of strongly connected components, calculate the weights around the perimeter of each SCC and then compare the weights. But I'm not sure how you could calculate SCCs using Bellman-Ford, and this algorithm is going to take too long.

Does anyone see an obvious solution?

by Math Newb at December 19, 2014 07:45 AM

StackOverflow

Trying to retrieve Avro data in Spark gives an error

Steps:

  1. Created JSON data, and an Avro schema for it:

{"username":"miguno","tweet":"Rock: Nerf paper, scissors is fine.","timestamp": 1366150681 }
{"username":"BlizzardCS","tweet":"Works as intended.  Terran is IMBA.","timestamp": 1366154481 }

and

{ "type" : "record", "name" : "twitter_schema", "namespace" : "com.miguno.avro", "fields" : [ { "name" : "username", "type" : "string", "doc" : "Name of the user account on Twitter.com" }, { "name" : "tweet", "type" : "string", "doc" : "The content of the user's Twitter message" }, { "name" : "timestamp", "type" : "long", "doc" : "Unix epoch time in seconds" } ], "doc:" : "A basic schema for storing Twitter messages" }

(thanks to Michael Noll)

  2. Converted it to Avro:

java -jar ~/avro-tools-1.7.4.jar fromjson --schema-file twitter.avsc twitter.json > twitter.avro

  3. Put it on HDFS:

hadoop fs -copyFromLocal twitter.avro

  4. In the Spark CLI, issued:

import org.apache.avro.generic.GenericRecord

import org.apache.avro.mapred.{AvroInputFormat, AvroWrapper}

import org.apache.hadoop.io.NullWritable

val path = "hdfs:///path/to/your/avro/folder"
val avroRDD = sc.hadoopFile[AvroWrapper[GenericRecord], NullWritable,
                            AvroInputFormat[GenericRecord]](path)

However when doing a:

avroRDD.first

I get:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 2.0 in stage 7.0 (TID 13) had a not serializable result: org.apache.avro.mapred.AvroWrapper at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)

What gives? I read somewhere that I need to use Kryo serialization, because Spark uses Java serialization, which requires serialized objects to implement Serializable, and AvroKey doesn't. Is there a different solution? Will Avro have Scala support at some point?
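
A hedged sketch of switching to Kryo (standard Spark configuration keys; the serializer must be set before the context starts):

import org.apache.spark.{SparkConf, SparkContext}

// Kryo can ship results whose classes don't implement Serializable.
val conf = new SparkConf()
  .setAppName("avro-read")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)

When working from the Spark shell, the equivalent is setting that property at launch rather than after sc already exists.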

thanks, Matt

by matthieu lieber at December 19, 2014 07:33 AM

TheoryOverflow

Is this problem related to subset-sum?

Let $\mathcal{X}$ be a compact, convex subset of $\mathbb{R}^M$ ($M$-dimensional real Euclidean space). Consider the problem \begin{align} \max_{\substack{\mathcal{F}\subset\{1,\dots,M\} \\ \mathbf{x}\in\mathcal{X}}} |\mathcal{F}| \\ s.t.~~~x_i\leq -1~,~~i\in\mathcal{F} \end{align} Here the objective function is the cardinality of the set of indices $\mathcal{F}$. Intuitively, given an arbitrary compact convex subset, we need to pick a point for which as many coordinates as possible are at most $-1$.

by Nomad at December 19, 2014 07:30 AM

QuantOverflow

Mean Variance Higher Borrowing

Is it possible to have such a kind of mean-variance frontier with a higher borrowing rate (showing returns decreasing as risk increases)? If it is possible, please explain the implications behind such a scenario.

by ahmed at December 19, 2014 07:22 AM

StackOverflow

Upgrade remote FreeBSD server no KVM

Environment:

  • Remote dedicated server
  • FreeBSD 9.1-release with custom kernel [quota,ipfirewall,ipfirewall_default_to_accept]
  • SSH access
  • No easy KVM access (only accessible by my provider)

Goal:

  • Upgrade to FreeBSD 9.2-release (for a start then... to 10.1...)

Reason:

  • FreeBSD 9.1-RELEASE is supported only until Dec 30, 2014

Question:

How do I upgrade FreeBSD 9.1-RELEASE to, for instance, 10.1-RELEASE, using freebsd-update or by rebuilding the kernel and world, knowing that the server can't be accessed after being rebooted into single-user mode because of the remote situation?

I read about setting kern.securelevel to 2 to allow installing the world without being in single-user mode; what should I make of that?

How do I upgrade the kernel from 9.2 sources without the 9.1 base system (it breaks with a CC header problem)?

How would you proceed?

by Will SxM at December 19, 2014 07:16 AM

CompsciOverflow

Find set of points with maximum distance inside given intervals?

Let $A$ be a set of $n$ closed intervals, $I_i$, with both extremes positive integers. Is there an efficient algorithm to find a set of $n$ points $P_i$, with $P_i \in I_i$, such that the minimum distance between all pairs of points is maximized?

Assume that the intervals are bounded by a positive integer $C$.

What's the complexity of this problem?

If the intervals are non-overlapping, I know how to solve this with linear programming (indeed, using a system of difference constraints and Bellman-Ford), but what can we say about the general case where intervals might overlap?
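
A hedged sketch of the encoding that last paragraph alludes to (my notation): for sorted, non-overlapping intervals $[l_1, r_1], \dots, [l_n, r_n]$, the decision problem "is the minimum pairwise distance at least $d$?" is the system of difference constraints $l_i \le P_i \le r_i$ for $1 \le i \le n$ and $P_{i+1} - P_i \ge d$ for $1 \le i < n$, whose feasibility Bellman-Ford checks on the constraint graph; a binary search on $d \in [0, C]$ then yields the optimum. In the overlapping case the left-to-right order of the chosen points is no longer fixed, which is exactly what breaks this encoding.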

by becko at December 19, 2014 07:04 AM

Universal algorithm testing framework [on hold]

Currently I'm working on my thesis. I'm writing about the subgraph isomorphism problem and various algorithms that solve it. One of my tasks is to integrate that problem into a testing framework developed by my mentor. It's a kind of universal framework where you define a problem, implement some algorithms that solve it, perform some tests, and analyze and compare the different algorithms...

I've searched the web for similar frameworks, but I haven't found anything like the one described above. Is there really no such thing as a universal framework to define a problem, run algorithms, and analyse the results?

by peterremec at December 19, 2014 06:54 AM

/r/compsci

Is it important to keep compsci textbooks?

As the semester is about to start once more, I'm debating whether I should purchase hard copies of my textbooks, rent them, or download them as PDFs. I dislike reading textbooks on a screen, but to save a couple hundred dollars I am willing to bear it.

But a point my friend made was that I'll probably end up using some of my books post-graduation. So the question is: is it worth buying books like Introduction to Algorithms, and post-undergrad, do you find yourself referring back to them?

submitted by TheMokaPot

December 19, 2014 06:40 AM

StackOverflow

Majordomo broker throughput measurement

I am testing the Majordomo broker's throughput. The test_client.c that comes along with the Majordomo code on GitHub sends synchronous requests. I want to test the maximum throughput that the Majordomo broker can achieve. The specification (http://rfc.zeromq.org/spec:7) says that it can switch up to a million messages per second.

First, I changed the client code to send 100k requests asynchronously. Even after setting the HWM on all the sockets sufficiently high, and increasing the TCP buffers to 4 MB, I was observing packet loss with three clients running in parallel.

So I changed the client to send 10k requests at once, and then send two requests for every reply that it receives. I chose 10k because that allowed me to run up to ten clients (each sending 100k messages) in parallel without any packet loss. Here is the client code:

#include "../include/mdp.h"
#include <time.h>
#include <sys/time.h> /* for gettimeofday() */
int main (int argc, char *argv [])
{
    int verbose = (argc > 1 && streq (argv [1], "-v"));
    mdp_client_t *session = mdp_client_new (argv[1], verbose);
    int count1, count2;
    struct timeval start,end;
    gettimeofday(&start, NULL);
    for (count1 = 0; count1 < 10000; count1++) {
        zmsg_t *request = zmsg_new ();
        zmsg_pushstr (request, "Hello world");
        mdp_client_send (session, "echo", &request);
    }
    for (count1 = 0; count1 < 45000; count1++) {
        zmsg_t *reply = mdp_client_recv (session,NULL,NULL);
        if (reply)
        {
            zmsg_destroy (&reply);
            zmsg_t *request = zmsg_new ();
            zmsg_pushstr (request, "Hello world");
            mdp_client_send (session, "echo", &request);
            request = zmsg_new ();
            zmsg_pushstr (request, "Hello world");
            mdp_client_send (session, "echo", &request);
        }
        else
            break; //  Interrupted by Ctrl-C
    }

    /* receiving the remaining 55k replies */
    for(count1 = 45000; count1 < 100000; count1++)
    {
        zmsg_t *reply = mdp_client_recv (session,NULL,NULL);
        if (reply)
        {
            zmsg_destroy (&reply);
        }
        else
        break;
    }
    gettimeofday(&end, NULL);
    long elapsed = (end.tv_sec - start.tv_sec) +((end.tv_usec - start.tv_usec)/1000000);
    printf("time = %ld\n", elapsed);
    printf ("%d replies received\n", count1);
    mdp_client_destroy (&session);
    return 0;
}

I ran the broker, worker, and the clients within the same machine. Here is the recorded time:

Number of clients in parallel      Time elapsed (seconds)
(each client sends 100k)

 1                                  4
 2                                  9
 3                                 12
 4                                 16
 5                                 21
10                                 43

So for every 100k requests, the broker is taking about 4 seconds. Is this the expected behavior? I am not sure how to achieve a million messages per second.

LATEST UPDATE:

I came up with an approach to improve the throughput of the system:

  1. Two brokers instead of one. One of the brokers (broker1) is responsible for sending the client requests to the workers, and the other broker (broker2) is responsible for sending the response of the workers to the clients.

  2. The workers register with broker1.

  3. The clients generate a unique id and register with broker2.

  4. Along with the request, a client also sends its unique id to broker1.

  5. Worker extracts the unique client id from the request, and sends its response (along with the client id to whom the response has to be sent) to broker2.

Now, every 100k requests take around 2 seconds instead of 4 seconds (when using a single broker). I added gettimeofday calls within the broker code to measure how much latency is added by the broker itself.

Here is what I have recorded

  1. 100k requests (total time: ~2 seconds) -> latency added by the brokers is 2 seconds
  2. 200k requests (total time: ~4 seconds) -> latency added by the brokers is 3 seconds
  3. 300k requests (total time: ~7 seconds) -> latency added by the brokers is 5 seconds

So the bulk of the time is being spent in the broker code. Could someone please suggest how to improve this?

by user1274878 at December 19, 2014 06:40 AM

/r/compsci

Computer science twilight zone.

A month ago I scored perfect on a computer science assignment question. It's a proof. But get this: I've spent ten minutes now trying to understand my answer or the answer sheet answer, and I'm completely lost. In fact, I don't even understand the question! It's like my doppelgänger did the question for me.

It's an extremely weird experience seeing a detailed explanation from yourself a month ago that you do not understand.

submitted by Zulban

December 19, 2014 06:24 AM

StackOverflow

transform do syntax to >>= with ((->) r) monad

On the page http://en.wikibooks.org/wiki/Haskell/do_Notation, there's a very handy way to transform do syntax with binding into the functional form (I mean, using >>=). It works well for quite a few cases, until I encountered a piece of code involving the function monad, ((->) r).

The code is

addStuff :: Int -> Int  
addStuff = do  
    a <- (*2)  
    b <- (+10)  
    return (a+b)

which is equivalent to defining

addStuff = \x -> x*2+(x+10)

Now if I use the handy way to rewrite the do part, I get

addStuff = (*2) >>= \a ->
           (+10) >>= \b ->
           a + b

which gives a compiling error. I understand that a, b are Int (or other types of Num), so the last function (\b -> a + b) has type Int -> Int, instead of Int -> Int -> Int.

But does this mean there's not always a way to transform from do to >>=? Is there any fix for this? Or am I just using the rule incorrectly?

by Bo-Xiao Zheng at December 19, 2014 06:23 AM

Writing a graphical windowing system: prerequisite knowledge? [on hold]

I led a team in 2001 to write a graphical windowing system atop the Linux framebuffer. My role was more that of a program manager and application developer, hence I don't know much about the lower-level stuff.

I am interested in writing a graphical windowing system for FreeBSD, one which is distinct from the X windowing system.

I believe this can be achieved by using "wscons".

I would like to know:

  1. prerequisite knowledge over and above being fluent in a low-level language like C or C++,
  2. any books I could work through? (I have heard about the one by Andries van Dam.)

by Mayuresh Kathe at December 19, 2014 06:21 AM

QuantOverflow

Is there any way to easily estimate and forecast a seasonal ARIMA-GARCH model in any software?

I use R to estimate a seasonal ARIMA(8,0,0)(5,0,1)[7] model for the seasonal differences of logs of daily electricity prices:

daily.fit <- arima(sd_l_daily_adj$Price,
                   order=c(8,0,0),
                   seasonal=list(order=c(5,0,1), period=7),
                   xreg = sd_l_daily_adj$Holiday,
                   include.mean=FALSE)

The problem is that of all the packages I've tried, only R's base arima function allows for the seasonal specification. Packages with GARCH estimation functions, such as fGarch and rugarch, only allow an ordinary ARMA(p, q) specification for the mean equation.

Any suggestions for any kind of software are welcome,

Thanks

by stofer at December 19, 2014 06:14 AM

StackOverflow

Represent long in least amount of characters

I need to represent both very large and small numbers in the shortest string possible. The numbers are unsigned. I have tried a straight Base64 encoding, but for some smaller numbers the encoded string is longer than just storing the number as a plain string. What would be the best way to store a very large or small number in the shortest possible URL-safe string?
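
A hedged sketch (in Scala; alphabet and helper names are illustrative): a variable-length base-62 encoding over a URL-safe alphabet stays short for small values and never needs more than 11 characters for a 64-bit value, since $62^{11} > 2^{64}$.

// URL-safe base-62 encoding of a non-negative Long.
val alphabet = ('0' to '9') ++ ('a' to 'z') ++ ('A' to 'Z')

def encode(n: Long): String = {
  require(n >= 0, "sketch covers non-negative values only")
  if (n == 0) "0"
  else Iterator.iterate(n)(_ / 62).takeWhile(_ > 0)
         .map(v => alphabet((v % 62).toInt)).mkString.reverse
}

encode(125L) // "21": two characters instead of three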

by Wiz at December 19, 2014 06:07 AM

/r/compsci

Algorithm close to Travelling Salesman?

As I rock my baby to sleep in my left arm, I sit here bored with phone in right hand.

I'm far removed from uni and the TSP, but the following problem has come to my brain while I rock here for hours, haha.

Let's say a salesman wants to do business. He must do business in exactly 5 countries, and only 1 city per country. Only 5 countries are given, so every country must be used.

We'll say 5 cities are listed for every country. Each city has a 'travel to' cost, and a 'business amount to be made $'.

For example, one of the countries might be Canada. It may cost $637 to travel to Vancouver and $2100 can be made there. It may cost $400 to travel Toronto and $1500 can be made there.

The price of future travel is not dependent on where you are. So we'll say each trip must be round trip and then you fly out of your home base again.

You have a travel budget of say $2000.

How would you solve this, especially for extremely large datasets? You could easily do this in 5^5 and figure out all the combinations, but that won't work for large data sets. Is there a better way? (A sketch follows the rules below.)

TL;DR:

Algorithm Rules

  • There are 5 countries
  • There are 5 cities in each country
  • $2000 flight budget
  • The business person MUST fly to each country once (therefore one city per country). The business person cannot spend all their money flying to 4 countries, and not have enough flight budget left to fly to the 5th. They must go to all.
  • All flights must be 'round trip' from the business person's home city. So the business person does not fly to Vancouver, only to find out flight prices have since changed for Shanghai. The flight prices don't change.
  • The goal is for the business person to make optimum business profit with the $2000 flight budget.
  • The flight budget does not impact the business profit whatsoever. If the business person only uses $1800 in flights, he cannot count $200 to his final business profit. The flights are a company write off, so he does not keep the leftover.
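
A hedged sketch of that better way (mine, not from the thread): under the rules above this is the multiple-choice knapsack problem (pick exactly one city per country, maximize total profit, keep total flight cost within budget), and a DP over countries and remaining budget solves it in O(countries * budget * citiesPerCountry) rather than by enumeration.

case class City(cost: Int, profit: Int)

// best(b) = max profit with total flight cost exactly b, one city per
// country processed so far; None marks unreachable states.
def bestProfit(countries: Seq[Seq[City]], budget: Int): Option[Int] = {
  val init = Array.fill[Option[Int]](budget + 1)(None)
  init(0) = Some(0)
  val finalRow = countries.foldLeft(init) { (best, cities) =>
    val next = Array.fill[Option[Int]](budget + 1)(None)
    for {
      b <- 0 to budget
      p <- best(b) // reachable states only
      c <- cities
      if b + c.cost <= budget
    } {
      val cand = p + c.profit
      if (next(b + c.cost).forall(_ < cand)) next(b + c.cost) = Some(cand)
    }
    next
  }
  finalRow.flatten.reduceOption(_ max _) // None if the budget can't reach all countries
}
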
submitted by LaserWolfTurbo72

December 19, 2014 05:19 AM

QuantOverflow

Historical Financial Statement to Backtest in R

I would like to preface this by saying I am preparing for an upcoming internship this summer so I am extremely new to Quant Finance.

At my university we have access to Datastream by Thomson Reuters and to Compustat. With either of those two services, can I download historical financial statements going back 20+ years so I can backtest screens in R?

I am looking for historical financial statements so I can backtest ratios such as ROE, inventory turnover, etc. This is why the free Yahoo! API, which only has price changes and volume, doesn't help me out.

Thank you.

by Scott Nunez at December 19, 2014 05:09 AM

StackOverflow

Getting a field in Scala json4s

How do I get a particular field out of a JSON object in Scala? I feel like I'm going in circles.

import org.json4s._
import org.json4s.jackson.JsonMethods._

val me = parse(""" {"name":"brian", "state":"frustrated"} """)

Now I want just the state. I was looking for something like

me("state") -> "frustrated"

I have tried

me("state")
me.get("state")
me \ "state" <thanks for the idea>
me['state']
me.state
me.NOOOOOOOOOO!!!!!!!

Help?
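
A hedged sketch of the usual json4s route (the \ operator plus extract, with DefaultFormats in scope):

import org.json4s._
import org.json4s.jackson.JsonMethods._

implicit val formats = DefaultFormats

val me = parse(""" {"name":"brian", "state":"frustrated"} """)
val state = (me \ "state").extract[String] // "frustrated"

For a quick look without extraction, (me \ "state").values also yields the underlying "frustrated".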

by Brian Dolan at December 19, 2014 05:07 AM

Wondermark

Check out: Design inspiration from Aaron Draplin

I really like this. Aaron Draplin, founder of Draplin Design Company in Portland and designer behind the Field Notes brand of notebooks, designs a logo from scratch in about ten minutes.

The resulting video is an explanation of the process a thoughtful designer goes through, and a demonstration of the power that experience and deep understanding brings to any sort of craftsmanship.

I find this sort of thing super inspiring! And Draplin has an easygoing, chummy enthusiasm that’s fun to listen to, too.

Here he is again, describing the workflow for creating a laurel element in Illustrator — but far more than just a design tutorial, it’s a metaphor for a deeper and more broadly applicable lesson about craft in general.

This third video is a brief bit of portfolio advice (that, ironically, uses ugly title cards I’m sure Draplin himself would make fun of).

Recommended for inspiration!

AdWeek has collected a few more videos of Draplin’s lengthier public talks and presentations, as well.

by David Malki at December 19, 2014 04:55 AM

Planet Clojure

My First Leiningen Template

Every time I sit down to write a quick piece of code for a blog post, it starts with "lein new." This is amazing and wonderful: it's super fast to set up a clean project. Good practice, good play.[1]

But not fast enough! I usually start with a property-based test, so the first thing I do every time is add test.check to the classpath, and import generators and properties and defspec in the test file. And now that I've got the hang of declaring input and output types with prismatic.schema, I want that everywhere too.

I can't bring myself to do this again - it's time to shave the yak and make my own leiningen template.

The instructions are good, but there are some quirks. Here's how to make your own personal template, bringing your own favorite libraries in every time.

It's less confusing if the template project directory is not exactly the template name, so start with:

  lein new template your-name --to-dir your-name-template
  cd your-name-template

Next, all the files in that directory are boring. Pretty them up if you want, but the meat is down in src/leiningen/new.

In src/leiningen/new/your-name.clj is the code that will create the project when your template is activated. This is where you'll calculate anything you need to include in your template, and render files into the right location. The template template gives you one that's pretty useless, so I dug into leiningen's code to steal and modify the default template's definition. Here's mine:

(defn jessitron
 [name]
 (let [data {:name name
             :sanitized (sanitize name)
             :year (year)}]
  (main/info "Generating fresh project with test.check and schema.")
  (->files data
     ["src/{{sanitized}}/core.clj" (render "core.clj" data)]
     ["project.clj" (render "project.clj" data)]
     ["README.md" (render "README.md" data)]
     ["LICENSE" (render "LICENSE" data)]
     [".gitignore" (render "gitignore" data)]
     ["test/{{sanitized}}/core_test.clj" (render "test.clj" data)]))

As input, we get the name of the project that someone is creating with our template.
The data map contains information available to the templates: that's both the destination file names and the initial file contents. Put whatever you like in here.
Then, set the message that will appear when you use the template.
Finally, there's a vector of destinations, paired with renderings from source templates.

Next, find the template files in src/leiningen/new/your-name/. By default, there's only one. I stole the ones leiningen uses for the default template, from here. They didn't work for me immediately, though: they referenced some data, such as {{namespace}}, that wasn't in the data map. Dunno how that works in real life; I changed them to use {{name}} and other items provided in the data.

When it's time to test, two choices: go to the root of your template directory, and use it.

lein new your-name shiny-new-project

This feels weird, calling lein new within a project, but it works. Now
cd shiny-new-project
lein test

and check for problems. Delete, change the template, try again.

Once it works, you'll want to use the template outside the template project. To get this to work, first edit project.clj, and remove -SNAPSHOT from the project version.[3] Then

lein install

Done! From now on I can lein new your-name shiny-new-project all day long.

And now that I have it, maybe I'll get back to the post I was trying to write when I refused to add test.check manually one last time.


[1] Please please will somebody make this for sbt? Starting a Scala project is a pain in the arse[2] compared to "lein new," which leans me toward Clojure over Scala for toy projects, and therefore real projects.

[2] and don't say use IntelliJ, it's even more painful there to start a new Scala project.

[3] At least for me, this was necessary. lein install didn't get it into my classpath until I declared it a real (non-snapshot) version.

by JessiTRON (noreply@blogger.com) at December 19, 2014 04:54 AM

QuantOverflow

Why Beta Distribution for Credit Migration

When modelling credit migration probabilities (e.g., AAA to AA), research has indicated the use of the Beta distribution, simply because it fits empirical data. My question is:

What are some other pros and what are some cons of modelling using this distribution? Are there any other distributions that could possibly be used?
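
For context, a hedged note (standard facts about the distribution, not from the post): the Beta density on $[0,1]$,
$$ f(x;\alpha,\beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, $$
has bounded support matching a probability, and its two shape parameters let it be symmetric, skewed, U-shaped, or unimodal, which is why it fits migration-rate data so readily; the commonly cited flip side is that two parameters may be too restrictive for heavy-tailed behaviour.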

by Jim at December 19, 2014 04:49 AM

StackOverflow

quoting choices based on ~ and ~@ in Clojure macro

I have two different Clojure macros, and depending on the operation (~@ vs ~), I need to quote the input or not.

(defmacro make2 [t]
  `(list 1 ~@t)) 

(defmacro make3 [t]
  `(list 1 ~t))

(make2 (1 2 3)) -> (1 1 2 3)

(make3 '(1 2 3)) -> (1 (1 2 3))

Why is this? I can guess that with a macro the arguments are not evaluated (that's the reason why make2 doesn't cause an error). However, I'm not sure what logic processes the arguments after the macro receives them.

by prosseek at December 19, 2014 04:36 AM

List of random values with Rng library

I am looking through Rng sources to see how they generate a list of random values.

They define a function fill:

def fill(n: Int): Rng[List[A]] = sequence(List.fill(n)(this))

where sequence is just an invocation of Traverse.sequence from scalaz:

def sequence[T[_], A](x: T[Rng[A]])(implicit T: Traverse[T]): Rng[T[A]] =
  T.sequence(x)

In other words, they create a temporary list List[Rng[A]] and then apply sequence: List[Rng[A]] => Rng[List[A]]. I see how it works, but the temporary list looks like a waste of memory to me. Is it absolutely necessary? Can it be improved?

by Michael at December 19, 2014 04:36 AM

json format building in scala

I have a list:

val k = List(protocol,sourceip,destinationip,sport,dport)

I need to print this in Json format as

(("key" -> "received") ~ ("values" -> jsObjectMap))

where jsObjectMap corresponds to

(x => ("name", protocol) ~ ("value", sourceip+destinationip+sport+dport))

grouping by each protocol. How do I build this format?
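
A hedged sketch with the json4s DSL (the field types, sample data, and one-object-per-protocol grouping are assumptions, since the post doesn't define them):

import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

case class Flow(protocol: String, sourceip: String,
                destinationip: String, sport: String, dport: String)

val flows = List(Flow("tcp", "10.0.0.1", "10.0.0.2", "5555", "80"))

// One {"name": ..., "value": [...]} object per protocol group.
val json =
  ("key" -> "received") ~
  ("values" -> flows.groupBy(_.protocol).toList.map { case (proto, fs) =>
    ("name" -> proto) ~
    ("value" -> fs.map(f => f.sourceip + f.destinationip + f.sport + f.dport))
  })

compact(render(json))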

by user3823859 at December 19, 2014 04:29 AM

Checking the equality of types involving existentials in Scala

I'm writing a function of the form

def test[A,B](a: A, b: B)(implicit eq: A =:= B): Unit = ...

where I require an evidence that types A and B are the same.

I would expect calls of the form test(a,a) to compile for any a, but it seems not to be the case when a's type involves existentials, like in

case class Foo[T](c: T)
val l = List(Foo(1), Foo(1.0F), Foo(0.0)) // Inferred type: List[Foo[_ >: Double with Float with Int <: AnyVal]]

test(l.head, l.head) // Does not compile, error like: Cannot prove that Foo[_7] =:= Foo[_7].

So my question is: am I mis-using =:=? Or could it be a bug? Or a fundamental limitation of existentials? Or a limitation of their implementation?

Context

I'm testing the return type of a function f involving dependent types. I expect it to return the same type for a and b in val a = f(...); val b = f(...), thus I call test(a, b). If a and b's types involve existentials, even test(a,a) and test(b,b) don't compile, as described above.

by al3xar at December 19, 2014 04:21 AM

Spark streaming on Yarn Error while creating FlumeDStream java.net.BindException: Cannot assign requested address

I am trying to create a Spark stream from Flume using the push-based approach. I am running Spark on my YARN cluster. While starting the stream, it is unable to bind the requested address. I am using the Scala shell to execute the program; below is the code I am using:

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.flume._
var ssc = new StreamingContext(sc,Seconds(60))
var stream = FlumeUtils.createStream(ssc,"master.internal", 5858);
stream.print()
stream.count().map(cnt => "Received " + cnt + " flume events." ).print()
ssc.start()
ssc.awaitTermination()

The Flume agent is unable to write to this port, since this code is unable to bind port 5858.

Flume stack trace:


 [18-Dec-2014 15:20:13] [WARN] [org.apache.flume.sink.AbstractRpcSink.start(AbstractRpcSink.java:294) 294] Unable to create Rpc client using hostname: hostname, port: 5858
org.apache.flume.FlumeException: NettyAvroRpcClient { host: hadoop-master.nycloudlab.internal, port: 7575 }: RPC connection error
        at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:178)
        at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:118)
        at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:624)



Caused by: java.io.IOException: Error connecting to /hostname:port
        at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:280)
        at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:206)
        at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:155)
        at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:164)
        ... 18 more
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:396)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:358)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:274)
        ... 3 more

Stack trace from Spark Streaming below:

    14/12/18 19:57:48 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: <server-name>/IP:5858
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
        at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:157)
        at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        ... 3 more
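
A hedged note (based on how Spark's push-based Flume integration works): the host passed to FlumeUtils.createStream is a bind address for an Avro server started on the worker that runs the receiver, not the Flume agent's address, so it must resolve to a local interface of that worker machine.

// Sketch: bind on all local interfaces of whichever worker hosts the
// receiver, then point the Flume avro sink at that worker's host:port.
var stream = FlumeUtils.createStream(ssc, "0.0.0.0", 5858)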

by Aaditya Raj at December 19, 2014 04:10 AM

TheoryOverflow

Assign each biclique to a distinct left node

Given a minimum biclique edge cover, is it always possible to assign each biclique to a distinct left node (which it contains)?

i.e., one such assignment for this graph (from Wikipedia): http://upload.wikimedia.org/wikipedia/commons/f/f7/Bipartite-dimension-biclique-cover.svg could be

(numbering the left vertices from 1 through 5 going down)

blue: 1

red: 2

green: 3

black: 4

by dspyz at December 19, 2014 03:54 AM

StackOverflow

clojure quotes and tilde in macros

I am new to Clojure and I am having trouble understanding its quoting system. I am writing a macro and I have made two similar cases - one works, the other doesn't. In a sense, I am just trying to surround my statement with a try/catch.

Here's the code that works:

(defmacro safe
  [arg1 arg2]
  (list 'let arg1 arg2)
)

Here's the code that doesn't work

(defmacro safe
    [arg1 arg2]
    '(try
        ~(list 'let arg1 arg2)
        (catch Exception e (str "Error: " (.getMessage e)))
    )
)

After the ~ symbol, it should escape the quote, but for some reason it seems like it doesn't. The error is: "Unable to resolve symbol: arg1 in this context...".

Thanks for your help!


Edit:

Code that I call the macro with:

(println (safe [s (new FileReader (new File "text.txt"))] (.read s)))

Also, I import this:

(import java.io.FileReader java.io.File)

The goal is to read the first symbol from a file while being safe from incorrect text file names. This is my school assignment, by the way, so I should not use any other way to do it, and the macro has to be called like that; I know about with-open, etc.

by Narmin Alieva at December 19, 2014 03:26 AM

rc.d script not working

I am trying to get TiddlyWiki running in a FreeBSD jail (on a NAS4Free-based host). In the rc.d script below, the start_postcmd / PID handling is not working. Any advice on why that might be the case?

#! /bin/sh
#
#
# PROVIDE: tiddlywiki
# REQUIRE: NETWORKING
# REQUIRE: DAEMON bgfsck
# KEYWORD: shutdown


. /etc/rc.subr

name="tiddlywiki"
rcvar="tiddlywiki_enable"

#start_cmd="tiddlywiki_start"
#stop_cmd="tiddlywiki_stop"

pidfile="/var/run/${name}.pid"



start_cmd="/usr/local/bin/node /usr/local/bin/tiddlywiki gosh --server 80 &"

# needed to set the pid manually, as the rc.subr pid handling didn't work
start_postcmd="sleep 5 ; ps aux | grep -i 'gosh --server 80' | awk 'NR<2 {print $2}' > /var/run/${name}.pid"

stop_cmd="cat /var/run/${name}.pid | xargs kill -9"

load_rc_config $name 
run_rc_command "$1"

by user3532518 at December 19, 2014 03:25 AM

Planet Emacsen

Grant Rettke: Using Units in Calc with Emacs

Calc is an advanced desk calculator and mathematical tool written by Dave Gillespie that runs as part of the GNU Emacs environment.

One special interpretation of algebraic formulas is as numbers with units. For example, the formula 5 m / s^2 can be read “five meters per second squared.”

Of course it can!

by Grant at December 19, 2014 03:19 AM

StackOverflow

Why would I use type T = <type> instead of trait[T]?

I'm not 100% certain I worded the question correctly, but let me show you an example of what I mean from a chapter in Programming Scala.

In Scala, I often see an abstract trait pattern:

trait Abstract {
  type T
  def transform(x: T): T
  val initial: T
  var current: T
}

class Concrete extends Abstract {
  type T = String
  def transform(x: String) = x + x
  val initial = "hi"
  var current = initial
}

Why would I do the above instead of the parameterized generic pattern below (which I'm more familiar with from imperative languages):

trait Abstract[T] {
  def transform(x: T): T
  val initial: T
  var current: T
}

class Concrete extends Abstract[String] {
  def transform(x: String) = x + x
  val initial = "hi"
  var current = initial
}
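
A hedged illustration of one practical difference (the parameterized trait is renamed AbstractP here, since both versions above share the name Abstract; dependent method types, standard since Scala 2.10, are assumed):

trait AbstractP[T] { def transform(x: T): T }

// With the type member, client code names the type through a value's path,
// so no type parameter appears in the signature:
def run(a: Abstract)(x: a.T): a.T = a.transform(x)

// With the type parameter, T must be threaded through every signature:
def runP[T](a: AbstractP[T])(x: T): T = a.transform(x)

Roughly: the member version suits cases where T is an implementation detail clients rarely name; the parameter version suits cases where callers routinely pick it, as in Abstract[String].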

by Jordan Parmer at December 19, 2014 03:06 AM

play-cache -- memoize a Future with an expiration that depends on the value of the future

Given the following functions:

val readV : String => Future[V]
val isExpired: V => Boolean

How do I memoize the result of readV until it isExpired, using the Play cache (or something else)?

Here is how I did it:

  def getCached(k: String) = Cache.getAs[Future[V]](k)
  def getOrRefresh(k: String) = getCached(k).getOrElse {
    this.synchronized {
      getCached(k).getOrElse {
        val vFut = readV(k)
        Cache.set(k, vFut)
        vFut
      }
    }
  }
  def get(k: String) = getOrRefresh(k).flatMap {
    case v if !isExpired(v) =>  Future.successful(v)
    case _ =>
      Cache.remove(k)
      getOrRefresh(k)
  }

This is too complicated for me to be confident it is correct.

Is there any simpler solution to do this?
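
A hedged alternative sketch that skips Play's cache entirely (names illustrative): hold the in-flight Future in an AtomicReference and swap it only when the resolved value reports expiry, so racing refreshers converge on one read.

import java.util.concurrent.atomic.AtomicReference
import scala.concurrent.{ExecutionContext, Future}

class Memo[V](readV: => Future[V], isExpired: V => Boolean)
             (implicit ec: ExecutionContext) {
  private val ref = new AtomicReference[Future[V]](null)

  def get: Future[V] = ref.get match {
    case null => refresh(null)
    case cur  => cur.flatMap { v =>
      if (isExpired(v)) refresh(cur) else Future.successful(v)
    }
  }

  // compareAndSet makes all callers converge on one cached Future;
  // a losing refresh is simply discarded.
  private def refresh(expect: Future[V]): Future[V] = {
    val fresh = readV
    if (ref.compareAndSet(expect, fresh)) fresh
    else ref.get // another caller refreshed first; reuse theirs
  }
}

One caveat: a failed Future stays cached in this sketch; production code would also evict on failure.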

by jilen at December 19, 2014 03:06 AM

TheoryOverflow

Identifying ambiguities in inductively learned concepts

I'm looking at ways in which "ambiguities" can be identified in labeled training data by a system undergoing some sort of inductive learning process. Do you know if there is any literature on this topic, and if so, where I could find that literature / what keywords to search for?

To better illustrate what I'm looking for, there is a classic parable of machine learning told by e.g. Dreyfus and Dreyfus (What Artificial Experts Can and Cannot Do, 1992) of an algorithm intended to classify whether or not pictures of woods contained a tank concealed between the trees. Pictures of empty woods were taken one day; pictures with concealed tanks were taken the next. The classifier identified the latter set with great accuracy, and tested extremely well on the portion of the data that had been withheld from training. However, the system performed poorly on new images. The first set of pictures were taken on a sunny day, whereas the latter were taken on a cloudy day. The classifier was not identifying tanks, it was identifying image brightness.

This leads to a question of how, in principle, one could take labeled training data and generate "ambiguities" that could be raised to user attention. The goal is not to generate "borderline" cases (e.g. images with medium brightness), the goal is to identify additional features in the data that could explain the training labels so that these can be turned into queries (e.g. "these labels can be explained both by brightness and by tanks").

I can think of a number of ways to approach this problem, but I'm sure it's already been tackled in various different ways. Can anyone point me towards the literature on this question?

by Nate at December 19, 2014 02:52 AM

StackOverflow

clojure.lang.Compiler$CompilerException in defmacro with Clojure

I can see that `, ~, and ~@ work fine in Clojure.

(def x '(1 2 3 4))
`(1 2 3 ~x) => (1 2 3 (1 2 3 4))
`(1 2 3 ~@x) => (1 2 3 1 2 3 4)

However, when I tried to use them in the macro:

(defmacro fa [x]
  `(1 2 3 ~x))

(fa x)

I got a clojure.lang.Compiler$CompilerException. What might be wrong?

2. Unhandled clojure.lang.Compiler$CompilerException
   Error compiling: /Users/smcho/Desktop/macro.clj:1:10

1. Caused by java.lang.ClassCastException
   java.lang.Long cannot be cast to clojure.lang.IFn

by prosseek at December 19, 2014 02:48 AM

CompsciOverflow

Fast implementation of basic addition algorithm

Write code for a modified version of the Grade School addition algorithm that adds the integer one to an m-digit integer. Thus, this modified algorithm does not even need a second number being added. Design the algorithm to be fast, so that it avoids doing excessive work on carries of zero.

I encountered this question looking over last year's final for my algorithms course. I'm not really sure how to answer it, although it seems like it isn't a very challenging question.
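
A hedged sketch of one possible answer (in Scala; most-significant-digit-first storage is an assumption): turn trailing 9s into 0s, bump the first non-9, and grow the array only when every digit was 9. Amortized over successive increments this touches O(1) digits per call.

// Increment a decimal number stored as an array of digits (MSD first).
def increment(digits: Array[Int]): Array[Int] = {
  var i = digits.length - 1
  while (i >= 0 && digits(i) == 9) { digits(i) = 0; i -= 1 } // clear the carry run
  if (i >= 0) { digits(i) += 1; digits }
  else Array(1) ++ digits // all nines: e.g. 999 + 1 = 1000
}

increment(Array(1, 2, 9)) // Array(1, 3, 0)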

by Chaz at December 19, 2014 02:42 AM

StackOverflow

Clojure's equivalent to Lisp's atom function

I have this Lisp code, and I'm trying to convert it into Clojure code.

(defun copy-tree (tr)
  (if (atom tr)
    tr
    (cons (copy-tree (car tr))
          (copy-tree (cdr tr)))))

It seems that Clojure doesn't have Lisp's atom (or rather, atom in Clojure has a very different meaning), so I had to modify the code as follows. (Am I using atom? wrong, or is there something else...?)

(defn single-valued?
     [x]
     (not (or (nil? x) 
              (.. x getClass isArray)
              (some #(instance? % x) [clojure.lang.Counted
                                      clojure.lang.IPersistentCollection
                                      java.util.Collection
                                      java.util.Map]))))

(defn copy-tree [tr]
  (if (or (= tr ()) (single-valued? tr))
    tr
    (cons (copy-tree (first tr))
          (copy-tree (rest tr)))))

The code works fine, but is there a better way to replace Lisp's atom function?

by prosseek at December 19, 2014 02:22 AM

TheoryOverflow

What is the contribution of lambda calculus to the field of theory of computation?

I'm just reading up on lambda calculus to "get to know it". I see it as an alternate form of computation as opposed to the Turing Machine. It's an interesting way of doing things with functions/reductions (crudely speaking). Some questions keep nagging at me though:

  • What's the point of lambda calculus? Why go through all these functions/reductions? What is the purpose?
  • As a result I'm left to wonder: what exactly did lambda calculus do to advance the theory of CS? What were its contributions that would allow me to have an "aha" moment of understanding the need for its existence?
  • Why is lambda calculus not covered in texts on automata theory? The common route is to go through various automata, grammars, Turing Machines and complexity classes. Lambda calculus is only included in the syllabus for SICP-style courses (perhaps not?). But I've rarely seen it be a part of the core curriculum of CS. Does this imply it's not all that valuable? Maybe not, and maybe I'm missing something here?

I'm aware that functional programming languages are based on lambda calculus, but I'm not considering that a valid contribution, since it was created well before we had programming languages. So, really, what is the point of knowing/understanding lambda calculus, w.r.t. its applications/contributions to theory?

by PhD at December 19, 2014 02:19 AM

CompsciOverflow

Why is the CPU Involved During Keyboard Echo?

I'm currently studying for a computer science exam, and I've come across a concept that has me somewhat stumped.

When one types a key on the keyboard, an ASCII character is transmitted to the CPU. Upon reception of this character, the CPU outputs the same character to the screen. This process is called echoing. Instead of having the CPU involved, why don’t we simply have this echoing process done within the keyboard/screen unit so that the CPU is free to do other useful work?

Now, intuitively, I feel like this is because there is no defined keyboard/screen unit, and the CPU is the device which is responsible for communicating between the screen and the keyboard, through the interconnection network. However, I feel like the fact that a keyboard/screen unit is mentioned may mean I'm missing an important concept. Is this the case? Why do we involve the CPU in the echo process?

by MMMMMCK at December 19, 2014 02:08 AM

Planet Theory

Kinetic $k$-Semi-Yao Graph and its Applications

Authors: Zahed Rahmati, Mohammad Ali Abam, Valerie King, Sue Whitesides
Download: PDF
Abstract: This paper introduces a new proximity graph, called the $k$-Semi-Yao graph ($k$-SYG), on a set $P$ of points in $\mathbb{R}^d$, which is a supergraph of the $k$-nearest neighbor graph ($k$-NNG) of $P$. We provide a kinetic data structure (KDS) to maintain the $k$-SYG on moving points, where the trajectory of each point is a polynomial function whose degree is bounded by some constant. Our technique gives the first KDS for the theta graph (i.e., $1$-SYG) in $\mathbb{R}^d$. It generalizes and improves on previous work on maintaining the theta graph in $\mathbb{R}^2$.

As an application, we use the kinetic $k$-SYG to provide the first KDS for maintenance of all the $k$-nearest neighbors in $\mathbb{R}^d$, for any $k\geq 1$. Previous works considered the $k=1$ case only. Our KDS for all the $1$-nearest neighbors is deterministic. The best previous KDS for all the $1$-nearest neighbors in $ \mathbb{R}^d$ is randomized. Our structure and analysis are simpler and improve on this work for the $k=1$ case. We also provide a KDS for all the $(1+\epsilon)$-nearest neighbors, which in fact gives better performance than previous KDS's for maintenance of all the exact $1$-nearest neighbors.

As another application, we present the first KDS for answering reverse $k$-nearest neighbor queries on moving points in $ \mathbb{R}^d$, for any $k\geq 1$.

December 19, 2014 01:42 AM

A Fire Fighter's Problem

Authors: Rolf Klein, Elmar Langetepe, Christos Levcopoulos (University of Bonn, Germany, Institute of Computer Science I; University of Lund, Sweden, Department of Computer Science)
Download: PDF
Abstract: Suppose that a circular fire spreads in the plane at unit speed. A fire fighter can build a barrier at speed $v>1$. How large must $v$ be to ensure that the fire can be contained, and how should the fire fighter proceed? We provide two results. First, we analyze the natural strategy where the fighter keeps building a barrier along the frontier of the expanding fire. We prove that this approach contains the fire if $v>v_c=2.6144 \ldots$ holds. Second, we show that any "spiralling" strategy must have speed $v>1.618$ in order to succeed.

Keywords: Motion Planning, Dynamic Environments, Spiralling strategies, Lower and upper bounds

December 19, 2014 01:42 AM

Compression of high throughput sequencing data with probabilistic de Bruijn graph

Authors: Gaëtan Benoit, Claire Lemaitre, Dominique Lavenier, Guillaume Rizk
Download: PDF
Abstract: Motivation: Data volumes generated by next-generation sequencing technologies are now a major concern, both for storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. Most reference-free tools developed for NGS data compression still use general text compression methods and fail to benefit from algorithms already designed specifically for the analysis of NGS data. The goal of our new method, Leon, is to achieve compression of DNA sequences of high-throughput sequencing data, without the need for a reference genome, with techniques derived from existing assembly principles that possibly better exploit NGS data redundancy. Results: We propose a novel method, implemented in the software Leon, for compression of DNA sequences issued from high-throughput sequencing technologies. This is a lossless method that does not need a reference genome. Instead, a reference is built de novo from the set of reads as a probabilistic de Bruijn graph, stored in a Bloom filter. Each read is encoded as a path in this graph, storing only an anchoring kmer and a list of bifurcations indicating which path to follow in the graph. This new method yields compressed read files that already contain their underlying de Bruijn graph, and are thus directly re-usable by many tools relying on this structure. Leon achieved encoding of a C. elegans read set with 0.7 bits/base, outperforming state-of-the-art reference-free methods. Availability: Open source, under GNU Affero GPL License, available for download at this http URL
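
The central data-structure trick, representing the set of graph nodes (kmers) as a Bloom filter so that membership queries are cheap but one-sided, can be sketched as follows (illustrative Scala, not Leon's actual implementation; sizes and hash counts are placeholders):

    import scala.util.hashing.MurmurHash3

    // Minimal Bloom filter over kmer strings: mightContain can return
    // false positives but never false negatives, which is what makes
    // the stored de Bruijn graph "probabilistic".
    class BloomFilter(bits: Int, hashes: Int) {
      private val set = new java.util.BitSet(bits)
      private def idx(s: String, seed: Int): Int =
        ((MurmurHash3.stringHash(s, seed) % bits) + bits) % bits
      def add(kmer: String): Unit =
        (0 until hashes).foreach(i => set.set(idx(kmer, i)))
      def mightContain(kmer: String): Boolean =
        (0 until hashes).forall(i => set.get(idx(kmer, i)))
    }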

December 19, 2014 01:41 AM

Introduction to Dynamic Unary Encoding

Authors: Ernst D. Berg
Download: PDF
Abstract: Dynamic unary encoding takes unary encoding to the next level. Every n-bit binary string is an encoding of dynamic unary and every n-bit binary string is encodable by dynamic unary. By utilizing both forms of unary code and a single bit of parity information dynamic unary encoding partitions 2^n non-negative integers into n sets of disjoint cycles of n-bit elements. These cycles have been employed as virtual data sets, binary transforms and as a mathematical object. Characterization of both the cycles and of the cycle spectrum is given. Examples of encoding and decoding algorithms are given. Examples of other constructs utilizing the principles of dynamic unary encoding are presented. The cycle as a mathematical object is demonstrated.

December 19, 2014 01:41 AM

An Algorithm for Online K-Means Clustering

Authors: Edo Liberty, Ram Sriharsha, Maxim Sviridenko
Download: PDF
Abstract: This paper shows that one can be competitive with the k-means objective while operating online. In this model, the algorithm receives vectors v1,...,vn one by one in arbitrary order. For each vector vi the algorithm outputs a cluster identifier before receiving vi+1. Our online algorithm generates ~O(k) clusters whose k-means cost is ~O(W*), where W* is the optimal k-means cost using k clusters and ~O suppresses polylogarithmic factors. We also show that, experimentally, it is not much worse than k-means++ while operating in a strictly more constrained computational model.
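
To illustrate the access model only (this is not the authors' algorithm): the constraint is that each vector is assigned a cluster identifier before the next one arrives. A naive threshold heuristic obeying that constraint, in Scala, with r a made-up tuning knob:

    object OnlineAssign {
      def dist(a: Array[Double], b: Array[Double]): Double =
        math.sqrt(a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum)

      // Emits a cluster id for each v_i before seeing v_{i+1}; opens a
      // new cluster when no existing center lies within radius r.
      def run(vs: Iterator[Array[Double]], r: Double): Iterator[Int] = {
        val centers = scala.collection.mutable.ArrayBuffer.empty[Array[Double]]
        vs.map { v =>
          if (centers.isEmpty || dist(centers.minBy(dist(_, v)), v) > r) {
            centers += v
            centers.size - 1
          } else centers.indices.minBy(i => dist(centers(i), v))
        }
      }
    }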

December 19, 2014 01:41 AM

New algorithms and lower bounds for monotonicity testing

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan
Download: PDF
Abstract: We consider the problem of testing whether an unknown Boolean function $f$ is monotone versus $\epsilon$-far from every monotone function. The two main results of this paper are a new lower bound and a new algorithm for this well-studied problem.

Lower bound: We prove an $\tilde{\Omega}(n^{1/5})$ lower bound on the query complexity of any non-adaptive two-sided error algorithm for testing whether an unknown Boolean function $f$ is monotone versus constant-far from monotone. This gives an exponential improvement on the previous lower bound of $\Omega(\log n)$ due to Fischer et al. [FLN+02]. We show that the same lower bound holds for monotonicity testing of Boolean-valued functions over hypergrid domains $\{1,\ldots,m\}^n$ for all $m\ge 2$.

Upper bound: We give an $\tilde{O}(n^{5/6})\text{poly}(1/\epsilon)$-query algorithm that tests whether an unknown Boolean function $f$ is monotone versus $\epsilon$-far from monotone. Our algorithm, which is non-adaptive and makes one-sided error, is a modified version of the algorithm of Chakrabarty and Seshadhri [CS13a], which makes $\tilde{O}(n^{7/8})\text{poly}(1/\epsilon)$ queries.

December 19, 2014 01:41 AM

On families of anticommuting matrices

Authors: Pavel Hrubeš
Download: PDF
Abstract: Let $e_{1},\dots, e_{k}$ be complex $n\times n$ matrices such that $e_{i}e_{j}=-e_{j}e_{i}$ whenever $i\not=j$. We conjecture that $\hbox{rk}(e_{1}^{2})+\hbox{rk}(e_{2}^{2})+\cdots+\hbox{rk}(e_{k}^{2})\leq O(n\log n)$, and prove some results in this direction.

December 19, 2014 01:41 AM

Boolean function monotonicity testing requires (almost) $n^{1/2}$ non-adaptive queries

Authors: Xi Chen, Anindya De, Rocco A. Servedio, Li-Yang Tan
Download: PDF
Abstract: We prove a lower bound of $\Omega(n^{1/2 - c})$, for all $c>0$, on the query complexity of (two-sided error) non-adaptive algorithms for testing whether an $n$-variable Boolean function is monotone versus constant-far from monotone. This improves a $\tilde{\Omega}(n^{1/5})$ lower bound for the same problem that was recently given in [CST14] and is very close to $\Omega(n^{1/2})$, which we conjecture is the optimal lower bound for this model.

December 19, 2014 01:40 AM

QuantOverflow

How to calculate stock move probability based on option implied volatility and time to expiration? (Monte Carlo simulation)

I am looking for a one-line formula, ideally in Excel, to calculate stock move probability based on option implied volatility and time to expiration.

I have already found a few complex samples which took a full page of data to calculate. Is it possible to simplify this calculation in one line formula with the following variables:

  1. Current stock price
  2. Target Price
  3. Calendar Days Remaining
  4. Percent Annual Volatility
  5. Dividend=0, Interest Rate=2%
  6. Random value to get something similar to Monte Carlo model?

I need these results:

  1. Probability of stock being above Target Price in %
  2. Probability of stock being below Target Price in %

similar to optionstrategist.com/calculators/probability

Any recommendations?
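
Under the usual lognormal (Black-Scholes) assumption, and it is only an assumption, the above-target probability reduces to a single normal CDF:

$$P(S_T > K) = N\!\left(\frac{\ln(S/K) + (r - q - \sigma^2/2)\,T}{\sigma\sqrt{T}}\right),$$

with the below-target probability being its complement. A Monte Carlo version would instead draw $S_T = S\,e^{(r-q-\sigma^2/2)T + \sigma\sqrt{T}Z}$ for standard normal $Z$ and count outcomes. A sketch in Scala using Apache Commons Math for the CDF (parameter names are mine):

    import org.apache.commons.math3.distribution.NormalDistribution

    object MoveProbability {
      private val stdNormal = new NormalDistribution()

      // Risk-neutral probability that the stock finishes above the target.
      def probAbove(spot: Double, target: Double, calendarDays: Double,
                    annualVol: Double, rate: Double = 0.02,
                    divYield: Double = 0.0): Double = {
        val t = calendarDays / 365.0
        val d = (math.log(spot / target) +
                 (rate - divYield - 0.5 * annualVol * annualVol) * t) /
                (annualVol * math.sqrt(t))
        stdNormal.cumulativeProbability(d)
      }

      def probBelow(spot: Double, target: Double, calendarDays: Double,
                    annualVol: Double): Double =
        1.0 - probAbove(spot, target, calendarDays, annualVol)
    }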

by Vtech at December 19, 2014 01:40 AM

/r/netsec

CompsciOverflow

Operation with same asymptotic cost on hash tables and lists

Let $x \in \{ \log n, n, \dots , n!\}$ some (cost) function.

Are there interesting operations with runtime in $O(x)$ on lists which also have runtime in $O(x)$ on hash tables?

by Niels at December 19, 2014 01:36 AM

What is the optimal strategy for filtering a large collection of items with multiple filter functions?

I have a large collection of items, and a list of independent filters (boolean functions). I want to find the collection of items that pass all of my filters as quickly as possible. This must involve looping over every item in the collection and applying each filter to each item. An item will be rejected early if any one of the filters fails during this process.

Each filter has a runtime that varies significantly, and we know a priori what percent of the collection each filter will filter out. Given this, how do I find the ordering in which I should apply my filters to each item with the fastest expected run-time overall?

For example, filter A runs in 5 ms and filters out 50% of the collection on average. Filter B runs in 1 ms and filters out 25% of the collection on average. From this, we know that ordering A,B gives 5 + 0.5 * 1 = 5.5 ms average runtime, and B,A gives 1 + 0.75 * 5 = 4.75 ms average runtime. So B,A is our fastest ordering.

I think this admits a dynamic programming solution (since the fastest ordering is the runtime of the first filter + (1 - filter fraction) * (optimal ordering of the rest)), but I was wondering if this problem has a name and has been studied before? Could somebody point me to any papers?
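
For what it's worth, this model is studied in the literature under the name pipelined filter ordering, and a standard adjacent-swap exchange argument suggests sorting filters by cost divided by rejection probability, ascending. A sketch matching the model above (Scala; all names are mine):

    final case class Filter(name: String, costMs: Double, rejectFrac: Double)

    // Expected cost of an ordering: each filter's cost is paid only by
    // the fraction of items still surviving when it runs.
    def expectedCost(order: Seq[Filter]): Double =
      order.foldLeft((0.0, 1.0)) { case ((cost, surviving), f) =>
        (cost + surviving * f.costMs, surviving * (1 - f.rejectFrac))
      }._1

    // Greedy order: cheapest rejection-per-millisecond first.
    def greedyOrder(fs: Seq[Filter]): Seq[Filter] =
      fs.sortBy(f => f.costMs / f.rejectFrac)

    // The question's example: the greedy picks B before A, and
    // expectedCost(Seq(Filter("B", 1, 0.25), Filter("A", 5, 0.5))) == 4.75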

by user3217070 at December 19, 2014 01:30 AM

arXiv Computer Science and Game Theory

A Note on the Assignment Problem with Uniform Preferences. (arXiv:1412.6078v1 [cs.GT])

Motivated by a problem of scheduling unit-length jobs with weak preferences over time-slots, the random assignment problem (also called the house allocation problem) is considered on a uniform preference domain. For the subdomain in which preferences are strict except possibly for the class of unacceptable objects, Bogomolnaia and Moulin characterized the probabilistic serial mechanism as the only mechanism satisfying equal treatment of equals, strategyproofness, and ordinal efficiency. The main result in this paper is that the natural extension of the probabilistic serial mechanism to the domain of weak, but uniform, preferences fails strategyproofness, but so does every other mechanism that is ordinally efficient and treats equals equally. If envy-free assignments are required, then any (probabilistic or deterministic) mechanism that guarantees an ex post efficient outcome must fail even a weak form of strategyproofness.

by <a href="http://arxiv.org/find/cs/1/au:+Sethuraman_J/0/1/0/all/0/1">Jay Sethuraman</a>, <a href="http://arxiv.org/find/cs/1/au:+Ye_C/0/1/0/all/0/1">Chun Ye</a> at December 19, 2014 01:30 AM

On Evolutionarily Stable States and Nash Equilibria that Are Not Characterised by the Folk Theorem. (arXiv:1412.6077v1 [cs.GT])

In evolutionary game theory, evolutionarily stable states are characterised by the folk theorem because exact solutions to the replicator equation are difficult to obtain. It is generally assumed that the folk theorem, which is the fundamental theory for non-cooperative games, defines all Nash equilibria in infinitely repeated games. Here, we prove that Nash equilibria that are not characterised by the folk theorem do exist. By adopting specific reactive strategies, a group of players can be better off by coordinating their actions in repeated games. We call it a level-k equilibrium when a group of players coordinate their actions and they have no incentive to deviate from their strategies simultaneously. The existence and stability of the level-k equilibrium in general games is discussed. This study shows that the set of evolutionarily stable states has greater cardinality than has been considered in many evolutionary games.

by <a href="http://arxiv.org/find/cs/1/au:+Li_J/0/1/0/all/0/1">Jiawei Li</a>, <a href="http://arxiv.org/find/cs/1/au:+Kendall_G/0/1/0/all/0/1">Graham Kendall</a> at December 19, 2014 01:30 AM

A Generalized Cheeger Inequality. (arXiv:1412.6075v1 [cs.DM])

The generalized conductance $\phi(G,H)$ between two graphs $G$ and $H$ on the same vertex set $V$ is defined as the ratio
$$\phi(G,H) = \min_{S\subseteq V} \frac{cap_G(S,\bar{S})}{cap_H(S,\bar{S})},$$
where $cap_G(S,\bar{S})$ is the total weight of the edges crossing from $S$ to $\bar{S}=V-S$. We show that the minimum generalized eigenvalue $\lambda(L_G,L_H)$ of the pair of Laplacians $L_G$ and $L_H$ satisfies
$$\lambda(L_G,L_H) \geq \phi(G,H)\,\phi(G)/8,$$
where $\phi(G)$ is the usual conductance of $G$. A generalized cut that meets this bound can be obtained from the generalized eigenvector corresponding to $\lambda(L_G,L_H)$. The inequality complements a recent proof that $\phi(G)$ cannot be replaced by $\Theta(\phi(G,H))$ in the above inequality, unless the Unique Games Conjecture is false.

by <a href="http://arxiv.org/find/cs/1/au:+Koutis_I/0/1/0/all/0/1">Ioannis Koutis</a>, <a href="http://arxiv.org/find/cs/1/au:+Miller_G/0/1/0/all/0/1">Gary Miller</a>, <a href="http://arxiv.org/find/cs/1/au:+Peng_R/0/1/0/all/0/1">Richard Peng</a> at December 19, 2014 01:30 AM

Exploiting the Structure of Bipartite Graphs for Algebraic and Spectral Graph Theory Applications. (arXiv:1412.6073v1 [cs.DM])

In this article, we extend several algebraic graph analysis methods to bipartite networks. In various areas of science, engineering and commerce, many types of information can be represented as networks, and thus the discipline of network analysis plays an important role in these domains. A powerful and widespread class of network analysis methods is based on algebraic graph theory, i.e., representing graphs as square adjacency matrices. However, many networks are of a very specific form that clashes with that representation: They are bipartite. That is, they consist of two node types, with each edge connecting a node of one type with a node of the other type. Examples of bipartite networks (also called \emph{two-mode networks}) are persons and the social groups they belong to, musical artists and the musical genres they play, and text documents and the words they contain. In fact, any type of feature that can be represented by a categorical variable can be interpreted as a bipartite network. Although bipartite networks are widespread, most literature in the area of network analysis focuses on unipartite networks, i.e., those networks with only a single type of node. The purpose of this article is to extend a selection of important algebraic network analysis methods to bipartite networks, showing that many methods from algebraic graph theory can be applied to bipartite networks with only minor modifications. We show methods for clustering, visualization and link prediction. Additionally, we introduce new algebraic methods for measuring the bipartivity in near-bipartite graphs.

by <a href="http://arxiv.org/find/cs/1/au:+Kunegis_J/0/1/0/all/0/1">J&#xe9;r&#xf4;me Kunegis</a> at December 19, 2014 01:30 AM

Nested Family of Cyclic Games with $k$-total Effective Rewards. (arXiv:1412.6072v1 [cs.DM])

We consider Gillette's two-person zero-sum stochastic games with perfect information. For each $k \in \mathbb{Z}_+$ we introduce a payoff function, called the $k$-total reward. For $k = 0$ and $1$ these payoffs are known as mean payoff and total reward, respectively. We restrict our attention to the deterministic case, the so called cyclic games. For all $k$, we prove the existence of a saddle point which can be realized by pure stationary strategies. We also demonstrate that $k$-total reward games can be embedded into $(k+1)$-total reward games. In particular, all of these classes contain mean payoff cyclic games.

by <a href="http://arxiv.org/find/cs/1/au:+Boros_E/0/1/0/all/0/1">Endre Boros</a>, <a href="http://arxiv.org/find/cs/1/au:+Elbassioni_K/0/1/0/all/0/1">Khaled Elbassioni</a>, <a href="http://arxiv.org/find/cs/1/au:+Gurvich_V/0/1/0/all/0/1">Vladimir Gurvich</a>, <a href="http://arxiv.org/find/cs/1/au:+Makino_K/0/1/0/all/0/1">Kazuhisa Makino</a> at December 19, 2014 01:30 AM

Very Low Cost Entropy Source Based on Chaotic Dynamics Retrofittable on Networked Devices to Prevent RNG Attacks. (arXiv:1412.6067v1 [cs.CR])

Good quality entropy sources are indispensable in most modern cryptographic protocols. Unfortunately, many currently deployed networked devices do not include them and may be vulnerable to Random Number Generator (RNG) attacks. Since most of these systems allow firmware upgrades and have serial communication facilities, the potential for retrofitting them with secure hardware-based entropy sources exists. To this aim, very low-cost, robust, easy to deploy solutions are required. Here, a retrofittable, sub 10$ entropy source based on chaotic dynamics is illustrated, capable of a 32 kbit/s rate or more and offering multiple serial communication options including USB, I2C, SPI or USART. Operation is based on a loop built around the Analog to Digital Converter (ADC) hosted on a standard microcontroller.

by <a href="http://arxiv.org/find/cs/1/au:+Fabbri_M/0/1/0/all/0/1">Mattia Fabbri</a>, <a href="http://arxiv.org/find/cs/1/au:+Callegari_S/0/1/0/all/0/1">Sergio Callegari</a> at December 19, 2014 01:30 AM

Numerical pricing of American options under two stochastic factor models with jumps using a meshless local Petrov-Galerkin method. (arXiv:1412.6064v1 [cs.CE])

The most recent update of financial option models is American options under stochastic volatility models with jumps in returns (SVJ) and stochastic volatility models with jumps in returns and volatility (SVCJ). To evaluate these options, mesh-based methods are applied in a number of papers but it is well-known that these methods depend strongly on the mesh properties which is the major disadvantage of them. Therefore, we propose the use of the meshless methods to solve the aforementioned options models, especially in this work we select and analyze one scheme of them, named local radial point interpolation (LRPI) based on Wendland's compactly supported radial basis functions (WCS-RBFs) with C6, C4 and C2 smoothness degrees. The LRPI method which is a special type of meshless local Petrov-Galerkin method (MLPG), offers several advantages over the mesh-based methods, nevertheless it has never been applied to option pricing, at least to the very best of our knowledge. These schemes are the truly meshless methods, because, a traditional non-overlapping continuous mesh is not required, neither for the construction of the shape functions, nor for the integration of the local sub-domains. In this work, the American option which is a free boundary problem, is reduced to a problem with fixed boundary using a Richardson extrapolation technique. Then the implicit-explicit (IMEX) time stepping scheme is employed for the time derivative which allows us to smooth the discontinuities of the options' payoffs. Stability analysis of the method is analyzed and performed. In fact, according to an analysis carried out in the present paper, the proposed method is unconditionally stable. Numerical experiments are presented showing that the proposed approaches are extremely accurate and fast.

by <a href="http://arxiv.org/find/cs/1/au:+Rad_J/0/1/0/all/0/1">Jamal Amani Rad</a>, <a href="http://arxiv.org/find/cs/1/au:+Parand_K/0/1/0/all/0/1">Kourosh Parand</a> at December 19, 2014 01:30 AM

Local weak form meshless techniques based on the radial point interpolation (RPI) method and local boundary integral equation (LBIE) method to evaluate European and American options. (arXiv:1412.6063v1 [cs.CE])

For the first time in mathematical finance field, we propose the local weak form meshless methods for option pricing; especially in this paper we select and analysis two schemes of them named local boundary integral equation method (LBIE) based on moving least squares approximation (MLS) and local radial point interpolation (LRPI) based on Wu's compactly supported radial basis functions (WCS-RBFs). LBIE and LRPI schemes are the truly meshless methods, because, a traditional non-overlapping, continuous mesh is not required, either for the construction of the shape functions, or for the integration of the local sub-domains. In this work, the American option which is a free boundary problem, is reduced to a problem with fixed boundary using a Richardson extrapolation technique. Then the $\theta$-weighted scheme is employed for the time derivative. Stability analysis of the methods is analyzed and performed by the matrix method. In fact, based on an analysis carried out in the present paper, the methods are unconditionally stable for implicit Euler (\theta = 0) and Crank-Nicolson (\theta = 0.5) schemes. It should be noted that LBIE and LRPI schemes lead to banded and sparse system matrices. Therefore, we use a powerful iterative algorithm named the Bi-conjugate gradient stabilized method (BCGSTAB) to get rid of this system. Numerical experiments are presented showing that the LBIE and LRPI approaches are extremely accurate and fast.

by <a href="http://arxiv.org/find/cs/1/au:+Rad_J/0/1/0/all/0/1">Jamal Amani Rad</a>, <a href="http://arxiv.org/find/cs/1/au:+Parand_K/0/1/0/all/0/1">Kourosh Parand</a>, <a href="http://arxiv.org/find/cs/1/au:+Abbasbandy_S/0/1/0/all/0/1">Saeid Abbasbandy</a> at December 19, 2014 01:30 AM

A Tutorial on Network Security: Attacks and Controls. (arXiv:1412.6017v1 [cs.CR])

With the phenomenal growth in the Internet, network security has become an integral part of computer and information security. In order to come up with measures that make networks more secure, it is important to learn about the vulnerabilities that could exist in a computer network and then have an understanding of the typical attacks that have been carried out in such networks. The first half of this paper will expose the readers to the classical network attacks that have exploited the typical vulnerabilities of computer networks in the past and solutions that have been adopted since then to prevent or reduce the chances of some of these attacks. The second half of the paper will expose the readers to the different network security controls including the network architecture, protocols, standards and software/ hardware tools that have been adopted in modern day computer networks.

by <a href="http://arxiv.org/find/cs/1/au:+Meghanathan_N/0/1/0/all/0/1">Natarajan Meghanathan</a> at December 19, 2014 01:30 AM

Complexity of distance fraud attacks in graph-based distance bounding. (arXiv:1412.6016v1 [cs.CR])

Distance bounding (DB) emerged as a countermeasure to the so-called \emph{relay attack}, which affects several technologies such as RFID, NFC, Bluetooth, and Ad-hoc networks. A prominent family of DB protocols are those based on graphs, which were introduced in 2010 to resist both mafia and distance frauds. The security analysis in terms of distance fraud is performed by considering an adversary that, given a vertex labeled graph $G = (V, E)$ and a vertex $v \in V$, is able to find the most frequent $n$-long sequence in $G$ starting from $v$ (MFS problem). However, to the best of our knowledge, it is still an open question whether the distance fraud security can be computed considering the aforementioned adversarial model. Our first contribution is a proof that the MFS problem is NP-Hard even when the graph is constrained to meet the requirements of a graph-based DB protocol. Although this result does not invalidate the model, it does suggest that a \emph{too-strong} adversary is perhaps being considered (i.e., in practice, graph-based DB protocols might resist distance fraud better than the security model suggests.) Our second contribution is an algorithm addressing the distance fraud security of the tree-based approach due to Avoine and Tchamkerten. The novel algorithm improves the computational complexity $O(2^{2^n+n})$ of the naive approach to $O(2^{2n}n)$ where $n$ is the number of rounds.

by <a href="http://arxiv.org/find/cs/1/au:+Trujillo_Rasua_R/0/1/0/all/0/1">Rolando Trujillo-Rasua</a> at December 19, 2014 01:30 AM

Jamming Based on an Ephemeral Key to Obtain Everlasting Security in Wireless Environments. (arXiv:1412.6014v1 [cs.CR])

Secure communication over a wiretap channel is considered in the disadvantaged wireless environment, where the eavesdropper channel is (possibly much) better than the main channel. We present a method to exploit inherent vulnerabilities of the eavesdropper's receiver to obtain everlasting secrecy. Based on an ephemeral cryptographic key pre-shared between the transmitter Alice and the intended recipient Bob, a random jamming signal is added to each symbol. Bob can subtract the jamming signal before recording the signal, while the eavesdropper Eve is forced to perform these non-commutative operations in the opposite order. Thus, information-theoretic secrecy can be obtained, hence achieving the goal of converting the vulnerable "cheap" cryptographic secret key bits into "valuable" information-theoretic (i.e. everlasting) secure bits. We evaluate the achievable secrecy rates for different settings, and show that, even when the eavesdropper has perfect access to the output of the transmitter (albeit through an imperfect analog-to-digital converter), the method can still achieve a positive secrecy rate. Next we consider a wideband system, where Alice and Bob perform frequency hopping in addition to adding the random jamming to the signal, and we show the utility of such an approach even in the face of substantial eavesdropper hardware capabilities.

by <a href="http://arxiv.org/find/cs/1/au:+Sheikholeslami_A/0/1/0/all/0/1">Azadeh Sheikholeslami</a>, <a href="http://arxiv.org/find/cs/1/au:+Goeckel_D/0/1/0/all/0/1">Dennis Goeckel</a>, <a href="http://arxiv.org/find/cs/1/au:+Pishro_Nik_H/0/1/0/all/0/1">Hossein Pishro-Nik</a> at December 19, 2014 01:30 AM

Montgomery's method of polynomial selection for the number field sieve. (arXiv:1412.6011v1 [cs.CR])

The number field sieve is the most efficient known algorithm for factoring large integers that are free of small prime factors. For the polynomial selection stage of the algorithm, Montgomery proposed a method of generating polynomials which relies on the construction of small modular geometric progressions. Montgomery's method is analysed in this paper and the existence of suitable geometric progressions is considered.

by <a href="http://arxiv.org/find/cs/1/au:+Coxon_N/0/1/0/all/0/1">Nicholas Coxon</a> (INRIA Nancy - Grand Est / LORIA) at December 19, 2014 01:30 AM

The Next 700 Impossibility Results in Time-Varying Graphs. (arXiv:1412.6007v1 [cs.DC])

We address highly dynamic distributed systems modeled by time-varying graphs (TVGs). We interest in proof of impossibility results that often use informal arguments about convergence. First, we provide a distance among TVGs to define correctly the convergence of TVG sequences. Next, we provide a general framework that formally proves the convergence of the sequence of executions of any deterministic algorithm over TVGs of any convergent sequence of TVGs. Finally, we illustrate the relevance of the above result by proving that no deterministic algorithm exists to compute the underlying graph of any connected-over-time TVG, i.e., any TVG of the weakest class of long-lived TVGs.

by <a href="http://arxiv.org/find/cs/1/au:+Braud_Santoni_N/0/1/0/all/0/1">Nicolas Braud-Santoni</a> (TU Graz), <a href="http://arxiv.org/find/cs/1/au:+Dubois_S/0/1/0/all/0/1">Swan Dubois</a> (INRIA), <a href="http://arxiv.org/find/cs/1/au:+Kaaouachi_M/0/1/0/all/0/1">Mohamed-Hamza Kaaouachi</a> (INRIA), <a href="http://arxiv.org/find/cs/1/au:+Petit_F/0/1/0/all/0/1">Franck Petit</a> (INRIA) at December 19, 2014 01:30 AM

Globally Governed Session Semantics. (arXiv:1412.5943v1 [cs.LO])

This paper proposes a bisimulation theory based on multiparty session types where a choreography specification governs the behaviour of session typed processes and their observer. The bisimulation is defined with the observer cooperating with the observed process in order to form complete global session scenarios and usable for proving correctness of optimisations for globally coordinating threads and processes. The induced bisimulation is strictly more fine-grained than the standard session bisimulation. The difference between the governed and standard bisimulations only appears when more than two interleaved multiparty sessions exist. This distinct feature enables to reason real scenarios in the large-scale distributed system where multiple choreographic sessions need to be interleaved. The compositionality of the governed bisimilarity is proved through the soundness and completeness with respect to the governed reduction-based congruence. Finally, its usage is demonstrated by a thread transformation governed under multiple sessions in a real usecase in the large-scale cyberinfrustracture.

by <a href="http://arxiv.org/find/cs/1/au:+Kouzapas_D/0/1/0/all/0/1">Dimitrios Kouzapas</a>, <a href="http://arxiv.org/find/cs/1/au:+Yoshida_N/0/1/0/all/0/1">Nobuko Yoshida</a> at December 19, 2014 01:30 AM

Low-Complexity Cloud Image Privacy Protection via Matrix Perturbation. (arXiv:1412.5937v1 [cs.CR])

Cloud-assisted image services are widely used for various applications. Due to the high computational complexity of existing image encryption technology, it is extremely challenging to provide privacy preserving image services for resource-constrained smart device. In this paper, we propose a novel encrypressive cloud-assisted image service scheme, called eCIS. The key idea of eCIS is to shift the high computational cost to the cloud allowing reduction in complexity of encoder and decoder on resource-constrained device. This is done via compressive sensing (CS) techniques, compared with existing approaches, we are able to achieve privacy protection at no additional transmission cost. In particular, we design an encryption matrix by taking care of image compression and encryption simultaneously. Such that, the goal of our design is to minimize the mutual information of original image and encrypted image. In addition to the theoretical analysis that demonstrates the security properties and complexity of our system, we also conduct extensive experiment to evaluate its performance. The experiment results show that eCIS can effectively protect image privacy and meet the user's adaptive secure demand. eCIS reduced the system overheads by up to $4.1\times\sim6.8\times$ compared with the existing CS based image processing approach.

by <a href="http://arxiv.org/find/cs/1/au:+Wu_X/0/1/0/all/0/1">Xuangou Wu</a>, <a href="http://arxiv.org/find/cs/1/au:+Tang_S/0/1/0/all/0/1">Shaojie Tang</a>, <a href="http://arxiv.org/find/cs/1/au:+Yang_P/0/1/0/all/0/1">Panlong Yang</a> at December 19, 2014 01:30 AM

Games for Active XML Revisited. (arXiv:1412.5910v1 [cs.DB])

The paper studies the rewriting mechanisms for intensional documents in the Active XML framework, abstracted in the form of active context-free games. The safe rewriting problem studied in this paper is to decide whether the first player, Juliet, has a winning strategy for a given game and (nested) word; this corresponds to a successful rewriting strategy for a given intensional document. The paper examines several extensions to active context-free games.

The primary extension allows more expressive schemas (namely XML schemas and regular nested word languages) for both target and replacement languages and has the effect that games are played on nested words instead of (flat) words as in previous studies. Other extensions consider validation of input parameters of web services, and an alternative semantics based on insertion of service call results.

In general, the complexity of the safe rewriting problem is highly intractable (doubly exponential time), but the paper identifies interesting tractable cases.

by <a href="http://arxiv.org/find/cs/1/au:+Schuster_M/0/1/0/all/0/1">Martin Schuster</a>, <a href="http://arxiv.org/find/cs/1/au:+Schwentick_T/0/1/0/all/0/1">Thomas Schwentick</a> at December 19, 2014 01:30 AM

Dense Testers: Almost Linear Time and Locally Explicit Constructions. (arXiv:1412.5889v1 [cs.DM])

We develop a new notion called $(1-\epsilon)$-tester for a set $M$ of functions $f:A\to C$. A $(1-\epsilon)$-tester for $M$ maps each element $a\in A$ to a finite number of elements $B_a=\{b_1,\ldots,b_t\}\subset B$ in a smaller sub-domain $B\subset A$ where for every $f\in M$ if $f(a)\not=0$ then $f(b)\not=0$ for at least $(1-\epsilon)$ fraction of the elements $b$ of $B_a$. I.e., if $f(a)\not=0$ then $\Pr_{b\in B_a}[f(b)\not=0]\ge 1-\epsilon$. The {\it size} of the $(1-\epsilon)$-tester is $\max_{a\in A}|B_a|$ and the goal is to minimize this size, construct $B_a$ in deterministic almost linear time and access and compute each map in poly-log time.

We use tools from elementary algebra and algebraic function fields to build $(1-\epsilon)$-testers of small size in deterministic almost linear time. We also show that our constructions are locally explicit, i.e., one can find any entry in the construction in time poly-log in the size of the construction and the field size. We also prove lower bounds that show that the sizes of our testers and the densities are almost optimal.

Testers were used in [Bshouty, Testers and its application, ITCS 2014] to construct almost optimal perfect hash families, universal sets, cover-free families, separating hash functions, black box identity testing and hitting sets. The dense testers in this paper shows that such constructions can be done in almost linear time, are locally explicit and can be made to be dense.

by <a href="http://arxiv.org/find/cs/1/au:+Bshouty_N/0/1/0/all/0/1">Nader H. Bshouty</a> at December 19, 2014 01:30 AM

Replacing ANSI C with other modern programming languages. (arXiv:1412.5867v1 [cs.CY])

Replacing ANSI C language with other modern programming languages such as Python or Java may be an actual debate topic in technical universities. Researchers whose primary interests are not in programming area seem to prefer modern and higher level languages. Keeping standard language ANSI C as a primary tool for engineers and for microcontrollers programming, robotics and data acquisition courses is another strong different opinion trend. Function oriented versus object oriented languages may be another highlighted topic in actual debates.

by <a href="http://arxiv.org/find/cs/1/au:+Dobrescu_L/0/1/0/all/0/1">Lidia Dobrescu</a> at December 19, 2014 01:30 AM

ConGUSTo: (HT)Condor Graphical Unified Supervising Tool. (arXiv:1412.5847v1 [cs.DC])

HTCondor is a distributed job scheduler developed by the University of Wisconsin-Madison, which allows users to run their applications in other users' machines when they are not being used, thus providing a considerably increase in the overall computational power and a more efficient use of the computing resources. Our institution has been successfully using HTCondor for more than ten years, and HTCondor is nowadays the most used Supercomputing resource we have. Although HTCondor provides a wide range of tools and options for its management and administration, there are currently no tools that can show detailed usage information and statistics in a clear, easy to interpret, interactive set of graphics displays. For this reason, we have developed ConGUSTo, a web-based tool that allows to collect HTCondor usage and statistics data in an easy way, and present them using a variety of tabular and graphics charts.

by <a href="http://arxiv.org/find/cs/1/au:+Dorta_A/0/1/0/all/0/1">Antonio Dorta</a>, <a href="http://arxiv.org/find/cs/1/au:+Caon_N/0/1/0/all/0/1">Nicola Caon</a>, <a href="http://arxiv.org/find/cs/1/au:+Prieto_J/0/1/0/all/0/1">Jorge Andres Perez Prieto</a> at December 19, 2014 01:30 AM

Bounding the Number of Hyperedges in Friendship $r$-Hypergraphs. (arXiv:1412.5822v1 [math.CO])

For $r \ge 2$, an $r$-uniform hypergraph is called a friendship $r$-hypergraph if every set $R$ of $r$ vertices has a unique 'friend' - that is, there exists a unique vertex $x \notin R$ with the property that for each subset $A \subseteq R$ of size $r-1$, the set $A \cup \{x\}$ is a hyperedge.

We show that for $r \geq 3$, the number of hyperedges in a friendship $r$-hypergraph is at least $\frac{r+1}{r} \binom{n-1}{r-1}$, and we characterise those hypergraphs which achieve this bound. This generalises a result given by Li and van Rees in the case when $r = 3$.

We also obtain a new upper bound on the number of hyperedges in a friendship $r$-hypergraph, which improves on a known bound given by Li, van Rees, Seo and Singhi when $r=3$.

by <a href="http://arxiv.org/find/math/1/au:+Gunderson_K/0/1/0/all/0/1">Karen Gunderson</a>, <a href="http://arxiv.org/find/math/1/au:+Morrison_N/0/1/0/all/0/1">Natasha Morrison</a>, <a href="http://arxiv.org/find/math/1/au:+Semeraro_J/0/1/0/all/0/1">Jason Semeraro</a> at December 19, 2014 01:30 AM

The Expressive Power of $\text{DL-Lite}_{R,\sqcap}$. (arXiv:1412.5795v1 [cs.LO])

Description logics are knowledge representation formalisms that provide the formal underpinning of the semantic web and in particular of the $\text{OWL}$ Ontology Web Language. In this paper we investigate the expressive power of logic $\text{DL-Lite}_{R,\sqcap}$, and some of its computational properties. We rely on simulations to characterize the absolute expressive power of $\text{DL-Lite}_{R,\sqcap}$ as a concept language, and to show that disjunction is not expressible. We also show that no simulation-based closure property exists for $\text{DL-Lite}_{R,\sqcap}$ assertions. Finally, we show that query answering of unions of conjunctive queries is $\text{NP-complete}$.

by <a href="http://arxiv.org/find/cs/1/au:+Thorne_C/0/1/0/all/0/1">Camilo Thorne</a> at December 19, 2014 01:30 AM

Optimizing User Association and Spectrum Allocation in HetNets: A Utility Perspective. (arXiv:1412.5731v1 [cs.NI])

The joint user association and spectrum allocation problem is studied for multi-tier heterogeneous networks (HetNets) in both downlink and uplink in the interference-limited regime. Users are associated with base-stations (BSs) based on the biased downlink received power. Spectrum is either shared or orthogonally partitioned among the tiers. This paper models the placement of BSs in different tiers as spatial point processes and adopts stochastic geometry to derive the theoretical mean proportionally fair utility of the network based on the coverage rate. By formulating and solving the network utility maximization problem, the optimal user association bias factors and spectrum partition ratios are analytically obtained for the multi-tier network. The resulting analysis reveals that the downlink and uplink user associations do not have to be symmetric. For uplink under spectrum sharing, if all tiers have the same target signal-to-interference ratio (SIR), distance-based user association is shown to be optimal under a variety of path loss and power control settings. For both downlink and uplink, under orthogonal spectrum partition, it is shown that the optimal proportion of spectrum allocated to each tier should match the proportion of users associated with that tier. Simulations validate the analytical results. Under typical system parameters, simulation results suggest that spectrum partition performs better for downlink in terms of utility, while spectrum sharing performs better for uplink with power control.

by <a href="http://arxiv.org/find/cs/1/au:+Lin_Y/0/1/0/all/0/1">Yicheng Lin</a>, <a href="http://arxiv.org/find/cs/1/au:+Bao_W/0/1/0/all/0/1">Wei Bao</a>, <a href="http://arxiv.org/find/cs/1/au:+Yu_W/0/1/0/all/0/1">Wei Yu</a>, <a href="http://arxiv.org/find/cs/1/au:+Liang_B/0/1/0/all/0/1">Ben Liang</a> at December 19, 2014 01:30 AM

Simulation leagues: Enabling replicable and robust investigation of complex robotic systems. (arXiv:1412.5711v1 [cs.RO])

Physically-realistic simulated environments are powerful platforms for enabling measurable, replicable and statistically-robust investigation of complex robotic systems. Such environments are epitomised by the RoboCup simulation leagues, which have been successfully utilised to conduct massively-parallel experiments in topics including: optimisation of bipedal locomotion, self-localisation from noisy perception data and planning complex multi-agent strategies without direct agent-to-agent communication. Many of these systems are later transferred to physical robots, making the simulation leagues invaluable well-beyond the scope of simulated soccer matches. In this study, we provide an overview of the RoboCup simulation leagues and describe their properties as they pertain to replicable and robust robotics research. To demonstrate their utility directly, we leverage the ability to run parallelised experiments to evaluate different competition formats (e.g. round robin) for the RoboCup 2D simulation league. Our results demonstrate that a previously-proposed hybrid format minimises fluctuations from 'true' (statistically-significant) team performance rankings within the time constraints of the RoboCup world finals. Our experimental analysis would be impossible with physical robots alone, and we encourage other researchers to explore the potential for enriching their experimental pipelines with simulated components, both to minimise experimental costsand enable others to replicate and expand upon their results in a hardware-independent manner.

by <a href="http://arxiv.org/find/cs/1/au:+Budden_D/0/1/0/all/0/1">David M Budden</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_P/0/1/0/all/0/1">Peter Wang</a>, <a href="http://arxiv.org/find/cs/1/au:+Obst_O/0/1/0/all/0/1">Oliver Obst</a>, <a href="http://arxiv.org/find/cs/1/au:+Prokopenko_M/0/1/0/all/0/1">Mikhail Prokopenko</a> at December 19, 2014 01:30 AM

On the Complexity of Nash Equilibria in Anonymous Games. (arXiv:1412.5681v1 [cs.GT])

We show that the problem of finding an {\epsilon}-approximate Nash equilibrium in an anonymous game with seven pure strategies is complete in PPAD, when the approximation parameter {\epsilon} is exponentially small in the number of players.

by <a href="http://arxiv.org/find/cs/1/au:+Chen_X/0/1/0/all/0/1">Xi Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Durfee_D/0/1/0/all/0/1">David Durfee</a>, <a href="http://arxiv.org/find/cs/1/au:+Orfanou_A/0/1/0/all/0/1">Anthi Orfanou</a> at December 19, 2014 01:30 AM

Large permutations and parameter testing. (arXiv:1412.5622v1 [cs.DM])

A classical theorem of Erd\H{o}s, Lov\'{a}sz and Spencer asserts that the densities of connected subgraphs in large graphs are independent. We prove an analogue of this theorem for permutations and we then apply the methods used in the proof to give an example of a finitely approximable permutation parameter that is not finitely forcible. The latter answers a question posed by two of the authors and Moreira and Sampaio.

by <a href="http://arxiv.org/find/cs/1/au:+Glebov_R/0/1/0/all/0/1">Roman Glebov</a>, <a href="http://arxiv.org/find/cs/1/au:+Hoppen_C/0/1/0/all/0/1">Carlos Hoppen</a>, <a href="http://arxiv.org/find/cs/1/au:+Klimosova_T/0/1/0/all/0/1">Tereza Klimosova</a>, <a href="http://arxiv.org/find/cs/1/au:+Kohayakawa_Y/0/1/0/all/0/1">Yoshiharu Kohayakawa</a>, <a href="http://arxiv.org/find/cs/1/au:+Kral_D/0/1/0/all/0/1">Daniel Kral</a>, <a href="http://arxiv.org/find/cs/1/au:+Liu_H/0/1/0/all/0/1">Hong Liu</a> at December 19, 2014 01:30 AM

A Simple construction of the Pseudorandom Generator from Permutation. (arXiv:1412.5619v1 [cs.CR])

A simple construction of pseudorandom generator is appear.This pseudorandom generator is always passed by NIST statistical test.This paper reports a pseudorandom number generator which has good property is able to construct using only permutation and data rewriting by XOR.

by <a href="http://arxiv.org/find/cs/1/au:+Terasawa_Y/0/1/0/all/0/1">Yoshihiro Terasawa</a> at December 19, 2014 01:30 AM

Performance Analysis of Geographic Routing Protocols in Ad Hoc Networks. (arXiv:1412.5616v1 [cs.NI])

Geographic routing protocols greatly reduce the requirements of topology storage and provide flexibility in the accommodation of the dynamic behavior of ad hoc networks. This paper presents performance evaluations and comparisons of two geographic routing protocols and the popular AODV protocol. The trade-offs among the average path reliabilities, average conditional delays, average conditional number of hops, and area spectral efficiencies and the effects of various parameters are illustrated for finite ad hoc networks with randomly placed mobiles. This paper uses a dual method of closed-form analysis and simple simulation that is applicable to most routing protocols and provides a much more realistic performance evaluation than has previously been possible. Some features included in the new analysis are shadowing, exclusion and guard zones, and distance-dependent fading.

by <a href="http://arxiv.org/find/cs/1/au:+Torrieri_D/0/1/0/all/0/1">Don Torrieri</a>, <a href="http://arxiv.org/find/cs/1/au:+Talarico_S/0/1/0/all/0/1">Salvatore Talarico</a>, <a href="http://arxiv.org/find/cs/1/au:+Valenti_M/0/1/0/all/0/1">Matthew C. Valenti</a> at December 19, 2014 01:30 AM

QuantOverflow

How to calibrate the Hull-White model using cap prices?

I'm given cap prices and swap rates, and i'm trying to calibrate the Hull-White model to them. I then want to use the model in order to price a swaption.

I know that the model can be calibrated from implied volatilities, but how do you do so with prices?

Is there a way to find the volatilities from the cap prices?
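
One practical route (a sketch, not the only calibration scheme): invert the Black-76 caplet formula numerically to turn prices back into implied vols, then calibrate Hull-White to those vols as usual. Note that a quoted cap price is a sum of caplet prices, so a stripping/bootstrapping step is normally needed first. Illustrative Scala with a bisection root-finder (all names and parameters are mine):

    import org.apache.commons.math3.distribution.NormalDistribution

    object CapletImpliedVol {
      private val N = new NormalDistribution()

      // Black-76 caplet price: df * accrual * (F N(d1) - K N(d2)).
      def blackCaplet(fwd: Double, strike: Double, expiry: Double,
                      vol: Double, df: Double, accrual: Double): Double = {
        val sd = vol * math.sqrt(expiry)
        val d1 = (math.log(fwd / strike) + 0.5 * sd * sd) / sd
        val d2 = d1 - sd
        df * accrual * (fwd * N.cumulativeProbability(d1) -
                        strike * N.cumulativeProbability(d2))
      }

      // The price is increasing in vol, so bisection converges.
      def impliedVol(price: Double, fwd: Double, strike: Double,
                     expiry: Double, df: Double, accrual: Double): Double = {
        var lo = 1e-6
        var hi = 5.0
        for (_ <- 0 until 100) {
          val mid = 0.5 * (lo + hi)
          if (blackCaplet(fwd, strike, expiry, mid, df, accrual) < price) lo = mid
          else hi = mid
        }
        0.5 * (lo + hi)
      }
    }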

by Sar at December 19, 2014 01:20 AM

TheoryOverflow

Can we verify satisfiability of first order statements via saturation in sub-exponential time?

In first-order logic, we can prove satisfiability in several ways: finite model generation, truthful monadic abstractions, and also saturation. With finite model generation techniques, we can verify the model directly. With saturation, we saturate the search space, running out of inferences, thus proving that a model exists.

How can we verify satisfiability demonstrated through saturation without rerunning the entire search, which might take quite a long time? Satisfiability demonstrated through saturation might correspond to an infinite model, so model generation may not be sufficient.

by dezakin at December 19, 2014 01:14 AM

StackOverflow

Task not serializable: java.io.NotSerializableException when calling function outside closure only on classes not objects

I'm getting strange behavior when calling a function outside of a closure:

  • when the function is in an object, everything works
  • when the function is in a class, I get:

    Task not serializable: java.io.NotSerializableException: testing
    

The problem is that I need my code in a class, not an object. Any idea why this is happening? Are Scala objects serializable by default?

This is a working code example:

  object working extends App {
    val list = List(1, 2, 3)

    val rddList = Spark.ctx.parallelize(list)
    // calling the function outside the closure
    val after = rddList.map(someFunc(_))

    def someFunc(a: Int) = a + 1

    after.collect().map(println(_))
  }

This is the non-working example:

  object NOTworking extends App {
    new testing().doIT
  }

  // adding extends Serializable won't help
  class testing {

    val list = List(1, 2, 3)

    val rddList = Spark.ctx.parallelize(list)

    def doIT = {
      // again calling the function someFunc
      val after = rddList.map(someFunc(_))
      // this will crash (Spark is lazy)
      after.collect().map(println(_))
    }

    def someFunc(a: Int) = a + 1

  }
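
One common workaround, sketched under the question's own setup (reusing its Spark.ctx; names of the new identifiers are mine): bind the function to a local val inside the method, so the closure passed to map captures only that function value rather than the enclosing instance (this), which is what drags the non-serializable class into the task.

  object nowWorking extends App {
    new testing2().doIT
  }

  class testing2 {
    val list = List(1, 2, 3)
    val rddList = Spark.ctx.parallelize(list)

    def doIT = {
      // Local function value: the closure captures addOne, not this,
      // so nothing non-serializable travels to the executors.
      val addOne = (a: Int) => a + 1
      val after = rddList.map(addOne)
      after.collect().map(println(_))
    }
  }

Moving someFunc into a companion object has the same effect, since the closure then references the object rather than the class instance.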

by Nimrod007 at December 19, 2014 01:06 AM

How can I make nrepl-ritz-jack-in work remotely over TRAMP / Emacs

What I want:

I have a Clojure program on a remote site; let's call it mccarthy. What I want to do is connect to an nrepl-ritz from my laptop, preferably using nrepl-ritz-jack-in. The jack-in works fine for a local program, but doesn't seem to connect to a remote program.

Attempt 1

C-x C-f on /mccarthy:code/program/project.clj

(require 'nrepl-ritz)

M-x nrepl-ritz-jack-in

Result

Emacs appears to hang. If I go to the *nrepl-server* buffer, I see this:

Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.flatland.drip.Main.invoke(Main.java:117)
    at org.flatland.drip.Main.start(Main.java:88)
    at org.flatland.drip.Main.main(Main.java:64)
Caused by: java.lang.AssertionError: Assert failed: project
    at leiningen.ritz_nrepl$start_jpda_server.invoke(ritz_nrepl.clj:23)
    at leiningen.ritz_nrepl$ritz_nrepl.doInvoke(ritz_nrepl.clj:95)

(and tons of other lines, too...)

I am using drip on my laptop, but not on mccarthy, so clearly nrepl-ritz-jack-in is not detecting that it's a remote file. Regular old nrepl-jack-in will work as expected in this case, however.

Attempt 2

I also tried starting an nrepl-ritz using lein on mccarthy:

mattox@mccarthy$ lein ritz-nrepl
nREPL server started on port 42874

From my laptop I forward a port so local 42874 connects to 42874 on mccarthy:

ssh -L 42874:localhost:42874 -N mccarthy

Then, from my local Emacs:

(require 'nrepl-ritz)

M-x nrepl

Host: 127.0.0.1

Port: 42874

This gives me a connection:

; nREPL 0.1.7-preview
user> 

So to test it out, I run

M-x nrepl-ritz-threads

It gives me a nice table of threads.

M-x nrepl-ritz-break-on-exception

user> (/ 1 0)

Result

This hangs, but sometimes shows a hidden debugging buffer with some restarts available. If I tell it to pass the exception back to the program, it never gives control back to the REPL.

I've done plenty of searches but have not been able to find anything more specific than "make sure lein is on your path" (and I did do that, on both machines...).

by MattoxBeckman at December 19, 2014 01:05 AM

Halfbakery

Portland Pattern Repository

/r/compsci

StackOverflow

How to use play-plugins-mailer with Play 2.3 and Scala 2.11?

I am trying to use the play plugin for sending emails:

https://github.com/playframework/play-mailer

I have followed the instructions as found on GitHub: added the dependency to build.sbt and created play.plugins with the specified content (do I need to register the file somehow?)

but I get a compilation error:

object mailer is not a member of package play.api.libs

when trying to import

import play.api.libs.mailer._

I get another compilation error on

val mail = use[MailerPlugin].email

MailerPlugin and use are not found.

How to get this working?

Note: the plugin is correctly downloaded (I can find it in my .ivy2 directory), but it is not listed as a dependency in my application.

My build.sbt file:

name := ...

version := "1.0-SNAPSHOT"

scalaVersion := "2.11.2"

resolvers += Resolver.typesafeRepo("releases")

//"mysql" % "mysql-connector-java" % "5.1.31"
libraryDependencies ++= Seq(
  "mysql" % "mysql-connector-java" % "5.1.24",
  "org.webjars" %% "webjars-play" % "2.3.0-2",
  "com.typesafe.play" %% "play-slick" % "0.8.0",
  "com.typesafe.play.plugins" %% "play-plugins-mailer" % "2.3.1",
  "org.mindrot" % "jbcrypt" % "0.3m"
)

fork in Test := false

lazy val root = (project in file(".")).enablePlugins(PlayScala)

And my play.plugins contains only:

1500:com.typesafe.plugin.CommonsMailerPlugin

UPDATE: I've downloaded the sample project from https://github.com/playframework/play-mailer and tried to compile using sbt. It failed with exactly the same problem.
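
A likely cause, for what it's worth: the com.typesafe.play.plugins artifact at 2.3.1 exposes its API under com.typesafe.plugin (matching the CommonsMailerPlugin entry in play.plugins above), while play.api.libs.mailer belongs to the newer play-mailer artifact. A sketch of the older plugin's usage, from memory and hedged accordingly:

    import com.typesafe.plugin._
    import play.api.Play.current // use needs an implicit Application

    val mail = use[MailerPlugin].email
    mail.setSubject("subject")
    mail.setRecipient("recipient@example.com")
    mail.setFrom("sender@example.com")
    mail.send("plain text body")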

by jfu at December 19, 2014 12:31 AM

CompsciOverflow

How many cookies in the cookie box? -- Tiling stars

With holiday season coming up I decided to make some cinnamon stars. That was fun (and the result tasty), but my inner nerd cringed when I put the first tray of stars in the box and they would not fit in one layer:

enter image description here

Almost! Is there a way they could have fit? How well can we tile stars, anyway? Given that these are regular six-pointed stars, we could certainly use the well-known hexagon tilings as an approximation, like so:

[photo: stars laid out in the hexagon pattern]
Messed up the one to the upper right, whoops.

But is this optimal? There's plenty of room between the tips.

For this consideration, let us restrict ourselves to rectangular boxes and six-pointed, regular stars, i.e. there are thirty degrees (or $\frac{\pi}{6}$) between every tip and its neighbouring nooks. The stars are characterised by the inner radius $r_i$ and outer radius $r_o$:

[diagram: star with inner radius $r_i$ and outer radius $r_o$] [source]

Note that we have hexagons for $r_i = \frac{\sqrt{3}}{2} \cdot r_o$ and hexagrams for $r_i = \frac{1}{\sqrt{3}} \cdot r_o$. I think it's reasonable to consider these the extremes (for cookies) and restrict ourselves to the range in between, i.e. $\frac{r_i}{r_o} \in \Bigl[\frac{1}{\sqrt{3}}, \frac{\sqrt{3}}{2}\Bigr]$.

My cookies have $r_i \approx 17\mathrm{mm}$ and $r_o \approx 25\mathrm{mm}$ ignoring imperfections -- I was going for taste, not form for once!

What is an optimal tiling for stars as characterised above? If there is no static best tiling, is there an algorithm to find a good one efficiently?
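
A quick sanity check on the "plenty of room between the tips" remark: such a star decomposes into 12 triangles with legs $r_i$ and $r_o$ and included angle $\frac{\pi}{6}$, so its area is $12 \cdot \frac{1}{2} r_i r_o \sin\frac{\pi}{6} = 3\,r_i r_o$, and the hexagon tiling covers a fraction $\frac{3 r_i r_o}{\frac{3\sqrt{3}}{2} r_o^2} = \frac{2}{\sqrt{3}}\cdot\frac{r_i}{r_o}$ of the plane. With the measurements above (a sketch in Scala):

    // Coverage fraction of the naive hexagon tiling for these cookies.
    val (ri, ro) = (17.0, 25.0) // mm, from the question
    val starArea = 3 * ri * ro // 12 triangles, legs ri and ro, 30 degrees apart
    val hexArea = 3 * math.sqrt(3) / 2 * ro * ro // hexagon with circumradius ro
    println(f"coverage = ${starArea / hexArea}%.3f") // prints about 0.785

So roughly a fifth of the box is air, which is what any improved tiling would have to claw back.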

by Raphael at December 19, 2014 12:27 AM

Lobsters

Lambda the Ultimate Forum

Snakes all the way down

Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.

  • We show how the execution and implementation details of WSVMs are separated in the same way as those of HLLVMs.
  • We show how the use of preprocessing-time meta-programming minimizes the code and decreases the complexity.
  • We provide a sample implementation of a WSVM prototype for PyPy which exhibits a simplified implementation without substantial loss of performance (about 40% compared to a similar WSVM in Java).

(groan, since when did Java become the gold standard for "fast"? I know, I know, "with a sufficiently advanced JIT...") (I (sincerely, seriously) apologize if this is not nearly theoretical enough for a forum like LtU.)

December 19, 2014 12:23 AM

/r/scala

[JOB] Want to use scala to control open-source drones? We're hiring a senior scala dev...

Hi ya'll,

We haven't posted this req yet on our website, but I'm one of the scala geeks at 3D Robotics. We are hiring a developer for a really interesting project in our Berkeley, CA, USA office. Nice folks, lots of fun robots/drones to control/write software for and a big sister open-source project.

If you are interested please send me a note and I'd be happy to put you in touch with the right folks...

submitted by punkgeek
[link] [6 comments]

December 19, 2014 12:19 AM

HN Daily

December 18, 2014

CompsciOverflow

Number of nodes to be expanded for DFS, BFS and Iterative-deepening search

I was wondering: given goal depth d, branching factor b and maximum depth m, what would be the minimum and maximum numbers of nodes expanded by BFS, DFS and iterative-deepening search with cutoff d and increments of 1?

I can work out the maximum but am confused about the minimum.

BFS: $b^d$, as the worst case is that you have to look at all the leaf nodes, which means looking at all the branches at the final depth level.

DFS: $b^m$, as the worst case is that you have to look at all the branches down to the maximum depth of the tree.

Iterative-deepening: I am a bit unsure on this; I think it is $b^d$, since each time you increment d by 1 you will need to look at all the nodes again.

What would the minimum be though?

This is just a sample question I found online while I was studying for an AI exam. I have absolutely no idea what to do for the minimum.

As I think about it, it seems the best case for BFS, DFS and iterative-deepening is that the solution is found at the first leaf node examined, but I have no idea how to represent that.
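
For reference, the maxima above come from geometric level sums (a standard identity, not an answer to the minimum question):

$$\sum_{i=0}^{d} b^i = \frac{b^{d+1}-1}{b-1} = O(b^d),$$

so expanding everything down to depth $d$ costs $O(b^d)$, everything down to depth $m$ costs $O(b^m)$, and iterative deepening re-pays the shallower levels once per cutoff while still summing to $O(b^d)$.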

by orange at December 18, 2014 11:52 PM

Complexity Analysis for a nested loop with two methods [duplicate]

This question already has an answer here:

Hey, I am studying for my intro algorithms class final and I'm not sure if I'm understanding this question correctly (it's from a sample final exam). If someone could explain this to me that would be awesome.

The following code processes A which is an n-by-n matrix of ints. The method nextInt() is O(1), and the method findMax() is O(n). What is the complexity of the given code as a function of the problem size n? Show the details of your analysis.

for(int i = 0; i < n; i++){

    for(int j = 0; j < n; j++)
        A[i][j] = random.nextInt(); 

    System.out.println(findMax(A[i]));
}

Without the methods, the nested for loops have complexity O(n^2). Now, random.nextInt() is O(1) but it is run n^2 times; does this affect the overall complexity? Sorry, I'm a little sleep deprived. If someone can help me answer this question that would be awesome.
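
For reference, one standard way to set up the sum (my own working, not the exam's official solution): each outer iteration does $n$ constant-time nextInt() calls plus one $O(n)$ findMax, so

$\sum_{i=0}^{n-1}\Big(\sum_{j=0}^{n-1} O(1) + O(n)\Big) = \sum_{i=0}^{n-1} O(n) = O(n^2).$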

by arberb at December 18, 2014 11:48 PM

StackOverflow

Scala 2.11 LinkedList is deprecated, what should I use?

According to the docs, scala.collection.mutable.LinkedList is deprecated as of the 2.11 version. Unfortunately I have found nothing to replace it with. I need an ordered collection that can remove an item from any index in constant time.

What should I use?
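
For what it's worth, no standard Scala collection removes by integer index in constant time; O(1) removal needs a handle on the node itself, which is what an iterator gives you. A minimal sketch using Java's LinkedList from Scala (a pragmatic workaround, not an official replacement):

import java.util.{LinkedList => JLinkedList}

val xs = new JLinkedList[Int]()
(1 to 5).foreach(xs.add(_))

// Remove every even element in one pass; each remove() is O(1)
// once the iterator is positioned on the element.
val it = xs.iterator()
while (it.hasNext) if (it.next() % 2 == 0) it.remove()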

by David Frank at December 18, 2014 11:39 PM

CompsciOverflow

How to enumerate minimal covers of a set

I have a set $S$ and a set $P = \{P_{1},...,P_{n}\}$ with $\bigcup P_{i} = S$. I want to find all the inclusion-minimal subsets of $P$ that are covers of $S$.

What is the best algorithm for enumerating all the inclusion-minimal covers of $S$ contained in a set $P$ and what is its running time?
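
No answer is recorded here, but to pin down the problem, a brute-force sketch in Scala (my own, exponential in $n$, which is unavoidable in general since there can be exponentially many inclusion-minimal covers):

// Enumerate subsets of P that cover S, then keep only the
// inclusion-minimal ones.
def minimalCovers[A](s: Set[A], p: Set[Set[A]]): List[Set[Set[A]]] = {
  val covers = p.subsets.filter(_.flatten == s).toList
  covers.filter(c => !covers.exists(c2 => c2 != c && c2.subsetOf(c)))
}

The interesting algorithmic question is doing this with polynomial delay rather than by exhaustive search.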

by Authary at December 18, 2014 11:39 PM

TheoryOverflow

Is there notation for converting a multi-set to a set?

Suppose we have a multi-set $S$. For example, $S = \{ 1,2,2,3 \}$. Suppose we also have a set $T$, e.g., $T=\{1,2,3\}$. I would like to say, compactly, that $S$, when its duplicates are removed, is equal to $T$. Is there a simple, standard notation to do this?
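
One convention that comes up in the literature (a suggestion, not a universal standard): write $\operatorname{supp}(S)$ for the support of the multiset, i.e. the set of its distinct elements, so the statement becomes $\operatorname{supp}(S) = T$. Some authors also write $\operatorname{set}(S) = T$.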

by Paul Medvedev at December 18, 2014 11:36 PM

TheoryOverflow

Quantum Computing & Ray Tracing Rendering Engines

As a non-expert in any of these fields, but out of interest, I have been looking into basic concepts of quantum computing. And I was wondering, taking the concept of ray tracing and rendering engines, could the two be combined?

The two ideas seem compatible: Rendering engines work by tracing samples of light as they bounce off objects, branching into more rays (in the case of branched ray tracing). Quantum computing unlocks the ability to 'sample all possibilities at once' and unify them somehow at the end. Are they compatible? Imagine the implications if rendering engines could take samples in parallel…

From my naïve understanding of quantum computing, the steps required to harness the power of QM are essentially:

  1. Create a scenario where the result can be any in range of possible outcomes. If handled properly, this results in a superposition being created that embraces all the possibilities. (This is where the ‘QM magic’ begins.)
  2. Perform operations and calculations on the superposition as if it were any single value, but without ever measuring it or doing anything else destructive to the superposition. Doing this essentially applies the operations to all of the possibilities, in parallel. (This is where the ground-shattering power of quantum computing comes from.)
  3. Collapse the superposition in such a way that you get an 'average' of the superpositions, similar to if you calculated all the possibilities in series and averaged the result. (This is the really hard part.)

My question is, how accurate is my conceptual understanding of quantum computing? And more to the point, could we make quantum ray tracing algorithms that harness QM like this, if we had the quantum computers? Would this fulfil mankind's ancient dream of achieving real-time computer graphics? Or is it really not that simple?

(PS. Not too much super-intelligent hyper mathematics, please… just a soft question.)

by Joseph at December 18, 2014 11:32 PM

Lambda the Ultimate Forum

You got your Monads in my FOP/AOP.

I'm reading a book on Feature Oriented Programming, and it seems to me that FOP/AOP suffer because it is hard to get at the join points easily in standard crappy languages. Side-effects and nested calls seemed to me to be two of the big trouble makers. So I figured if we did things in a purely functional maybe monadic or command/continuation passing style, then we'd be able to intercept things w/out restriction. In fact, couldn't we transform sad bad imperative code into such a form?

What is sad to me is that, judging by a very brief look at the citations, there isn't a community of like-minded thinkers all getting together to advance this cause, which seems like it could be a very worthy one. I mean, it seems like one could commercialize this if one could take crappy C/++/Java/whatever code and perform maintenance magic on it.

(Some of the citations mention people who are (or have been) active on LtU, of course since LtU is the bee's (bees'?) knees.)

A very small, very random, sampling of the papers out there:

* A monadic interpretation of execution levels and exceptions for AOP
* Effective Aspects: A Typed Monadic Embedding of Pointcuts and Advice
* Monads as a theoretical foundation for AOP
* blog post AOP is... (monads)

et al.

December 18, 2014 11:26 PM

StackOverflow

Clojure ring wrap-json-params messing up JSON arrays

I'm currently doing some REST API stuff in clojure, and I am using the ring.middleware.format library with compojure to transform JSON to and from clojure data structures.

I am having a huge issue, in that any JSON posted to the ring app will have all arrays replaced with a single one of the items that was in the array. I.e., it will turn this JSON posted to it from

{
    "buyer":"Test Name",
    "items":[
        {"qty":1,"size":"S","product":"Red T-Shirt"},
        {"qty":1,"size":"M","product":"Green T-Shirt"}
    ],
    "address":"123 Fake St",
    "shipping":"express"
}

to this

{
    "buyer": "Test Name",
    "items": {
        "qty": 1,
        "size": "M",
        "product": "Green T-Shirt"
    },
    "address": "123 Fake St",
    "shipping": "express"
}

It does it for any arrays, including when an array is the root element.

I am using the following code in clojure to return the json:

(defroutes app-routes
  (GET "/"
       []
       {:body test-data})
  (POST "/"
        {data :params}
        {:body data}))
        ;{:body (str "Printing " (count (data :jobs)) " jobs")}))

(def app
  (-> (handler/api app-routes)
      (wrap-json-params)
      (wrap-json-response)))

The GET route has no issues with arrays and outputs properly, so it has to be either the way I am getting the data or the wrap-restful-params middleware.

Any ideas?

by Tom Brunoli at December 18, 2014 11:06 PM

underscorejs map with jquery

I am trying to use underscore-contrib (http://documentcloud.github.io/underscore-contrib/) and jQuery to construct objects. My code looks like this

1.  var rightMap = _.rcurry2(_.map);
2.  var tabKeys = ['tab1', 'tab2', 'tab3', 'tab4'],
3.      rootElements = _.pipeline(
4.                         rightMap(function(k){return '#' + k;}),
5.                         rightMap(function(k){return $(k);})  //Here is what confused me
6.                     )(tabKeys);
//expected result: [$('#tab1'), $('#tab2'), $('#tab3'),$('#tab4')];

This code works as expected. However, I am not happy with line 5. I wanted to replace line 5 with

  rightMap($)

This attempt broke the code. Only $('#tab1') was created. I was getting

  [$('#tab1'), [], [],[]];

as the result. I am wondering what the difference is between these two pieces of code.

Thanks.

Update: I have added the link to the library I am using.

Update: added jsfiddle results. http://jsfiddle.net/qbzuduom/1/ is the working situation; http://jsfiddle.net/qbzuduom/2/ is the one with the problem.

Please press F12 in your browser and look at the console to see the difference.

What made the difference in the code?

To the downvoter of this question: can I ask why?

by Wei Ma at December 18, 2014 10:51 PM

AWS

AWS Support - Now Hiring!

Now Hiring
As is the case with many parts of AWS, the team behind AWS Support is growing fast and is looking for top-notch people to fill a multitude of open positions. Here are some of the positions that they are working to fill (click through to apply or to read a detailed job description):

Cloud Support Engineer - (Dallas, Texas) - You get to field, troubleshoot, and manage technical customer issues via phone, chat and email. You help to recreate customer issues and build proof-of-concept applications, and you represent the voice of the customer to internal AWS teams. You can share your knowledge with the AWS community by creating tutorials, how-to videos, and technical articles.

Big Data Devops Support Engineer (Dallas, Texas) - This position is similar to the previous one, but you'll need to have some experience with popular Big Data tools such as Amazon Elastic MapReduce, Zookeeper, HBase, HDFS, Pig, or Hive. If you know what ETL is all about and can apply it using Hadoop, that's even better!

Cloud Support Engineer-Deployment (Dallas, Texas) - This position is similar to the first one, with a focus on development and deployment. You'll need experience with Java, .NET, Node.JS, PHP, Python, or Ruby, familiarity with DevOps principles, and should be able to work with all tiers of the application stack from the OS on up.

Cloud Technical Account Manager (Dallas, Texas) - In this role you will work directly with customers to build mindshare and broad usage of AWS within their organizations. You will be their primary technical contact for one or more customers and you'll get to help them to plan, debug, and monitor their cloud-based, business-critical applications. You will get to help them to scale for world-class events and you'll represent the customers' needs to the rest of the AWS team.

Senior Cloud Technical Account Manager (Dallas, Texas) - This is a more senior version of the previous position! It requires significant IT, AWS, and distributed systems expertise and experience.

Operations Manager-AWS Support (Dallas, Texas) - In this role you will use your operational, leadership, and technical skills to lead a team of Cloud Support Engineers.

Software Development Engineer, Kumo Development Team (Seattle, Washington) - In this role you will help to build the next generation of CRM systems to help us to improve the overall support experience for AWS customers. Experience with data mining, information retrieval, and text analysis is a definite plus for this position.

Senior Manager, Product Management, Amazon Web Services (Seattle, Washington) - In this role you will be responsible for creating the vision and the product strategy for AWS Support's products and services. You'll get to manage the entire product life cycle, starting with strategic planning all the way through to tactical execution. You will need a strong product management track record and you'll need to show us that you know how to translate customer needs in to features, pricing models, and merchandising opportunities.

Senior Product Manager, Amazon Web Services (Seattle, Washington) - In this role you will create marketing materials for support offerings, define and manage marketing programs, think about product and service positioning, and champion the needs of our customers. You'll need a strong marketing background, ideally with experience in the IT industry.

Senior Technical Program Manager (Seattle, Washington) - In this role you will lead product initiatives, working closely with customers, development teams, vendors, partners, and the AWS service teams. You'll need program management or project management skills and a strong technical background!

Even More Positions
The positions that I listed above are based in Dallas and Seattle. If you are interested in similar positions in other cities and countries, please check out these links:

More About Support
To learn more about AWS Support, watch this video:

Candidates often ask me for special insider tips that will help them to navigate the Amazon hiring process! My answer is always the same -- spend some time studying the Amazon Leadership Principles and make sure that you can relate them to significant events in your career and in your personal life.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at December 18, 2014 10:46 PM

StackOverflow

Is ActionFunction Asynchronous by default in Play Framework 2.3.x?

Reading the Play documentation for Scala, they say that Actions are asynchronous by default. Is it the same when you compose actions with ActionFunction and derivatives like ActionBuilder?

I mean when you do something like this:

class AuthenticatedDbRequest[A](val user: User,
                                val conn: Connection,
                                request: Request[A]) extends WrappedRequest[A](request)

object Authenticated extends ActionBuilder[AuthenticatedDbRequest] {
  def invokeBlock[A](request: Request[A], block: (AuthenticatedDbRequest[A]) => Future[Result]) = {
    AuthenticatedBuilder(req => getUserFromRequest(req)).authenticate(request, { authRequest: AuthenticatedRequest[A, User] =>
      DB.withConnection { conn =>
        block(new AuthenticatedDbRequest[A](authRequest.user, conn, request))
      }
    })
  }
}

Where the block could potentially block for a long time, is invokeBlock executed asynchronously?

by nowxue at December 18, 2014 10:27 PM

StackOverflow

Scala - byte array of UTF8 strings

I have a byte array (or more precisely a ByteString) of UTF8 strings, which are prefixed by their length as 2 bytes (msb, lsb). For example:

val z = akka.util.ByteString(0, 3, 'A', 'B', 'C', 0, 5, 
        'D', 'E', 'F', 'G', 'H',0,1,'I')

I would like to convert this to a list of strings, so it should be similar to List("ABC", "DEFGH", "I").

Is there an elegant way to do this?

(EDIT) These strings are NOT null terminated, the 0 you are seeing in the array is just the MSB. If the strings were long enough, the MSB would be greater than zero.
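
For what it's worth, a minimal sketch of one way to do it (my own, not from an answer): read a 2-byte big-endian length, then decode that many bytes as UTF-8, and repeat:

import akka.util.ByteString
import scala.annotation.tailrec

def decodeStrings(bs: ByteString): List[String] = {
  @tailrec
  def loop(rest: ByteString, acc: List[String]): List[String] =
    if (rest.isEmpty) acc.reverse
    else {
      val len = ((rest(0) & 0xff) << 8) | (rest(1) & 0xff) // msb, lsb
      val (payload, tail) = rest.drop(2).splitAt(len)
      loop(tail, payload.utf8String :: acc)
    }
  loop(bs, Nil)
}

decodeStrings(z)  // List("ABC", "DEFGH", "I")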

by Will I Am at December 18, 2014 10:26 PM

StackOverflow

How to turn a known structured RDD to Vector

Assuming I have an RDD containing (Int, Int) tuples. I wish to turn it into a Vector where first Int in tuple is the index and second is the value.

Any idea how I can do that?
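
No answer appears here, but a minimal sketch under the assumption that "Vector" means a plain Scala Vector and the first components are dense 0-based indices (both assumptions, since the question doesn't say):

import org.apache.spark.rdd.RDD

def toVector(rdd: RDD[(Int, Int)]): Vector[Int] = {
  val pairs = rdd.collect()                           // pulls everything to the driver
  val arr   = Array.fill(pairs.map(_._1).max + 1)(0)  // 0 fills any gaps; assumes non-empty RDD
  pairs.foreach { case (i, v) => arr(i) = v }
  arr.toVector
}

If an MLlib vector is wanted instead, Vectors.sparse(size, indices, values) takes the same data without densifying it.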

by Noam Shaish at December 18, 2014 10:20 PM

StackOverflow

In Scala find files that match a wildcard String

How does one obtain an Array[io.BufferedSource] for all files that match a wildcard in a given directory?

Namely, how to define a method io.Source.fromDir such that

val txtFiles: Array[io.BufferedSource] = io.Source.fromDir("myDir/*.txt") // ???

I noticed FileUtils in Apache Commons IO, yet I'd much prefer a Scala-API-based approach without external dependencies.
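
In the absence of an answer, a minimal sketch of such a fromDir (the method name is the question's own invention, not a real scala.io API): split the pattern into directory and glob, translate the glob to a regex, and open each match:

import java.io.File
import scala.io.{BufferedSource, Source}

def fromDir(pattern: String): Array[BufferedSource] = {
  val f     = new File(pattern)
  val dir   = Option(f.getParentFile).getOrElse(new File("."))
  // Escape dots, then turn * into the regex wildcard .*
  val regex = f.getName.replace(".", "\\.").replace("*", ".*").r
  dir.listFiles()
     .filter(file => regex.pattern.matcher(file.getName).matches())
     .map(Source.fromFile(_))
}

val txtFiles: Array[BufferedSource] = fromDir("myDir/*.txt")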

by enzyme at December 18, 2014 10:12 PM

StackOverflow

Sanitising database inputs in Clojure with Korma

I'm using Korma behind a RESTful API, and it occurs to me that I'm passing user-submitted values through to my (insert) calls. Is there a nice way in Clojure to protect against SQL injection attacks? Korma generates SQL in a pretty straightforward way, so if somebody told me their name was little Bobby Tables, I'm fearful that it would hurt.

by Conan at December 18, 2014 10:03 PM

Hadoop : java.io.IOException: Pass a Delete or a Put

I'm trying to implement CopyTable in Scala, based on http://hbase.apache.org/book/mapreduce.example.html#mapreduce.example.readwrite

Here's my example code - is there any better way than doing it like this?

package com.example

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.client.Get
import java.io.IOException
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase._
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.io._
import org.apache.hadoop.hbase.mapreduce._
import org.apache.hadoop.io._
import org.apache.hadoop.mapreduce._
import scala.collection.JavaConversions._

case class HString(name: String) {
        lazy val bytes = name.getBytes
        override def toString = name
}
object HString {
        import scala.language.implicitConversions
        implicit def hstring2String(src: HString): String = src.name
        implicit def hstring2Bytes(src: HString): Array[Byte] = src.bytes
}

object Families {
        val stream = HString("stream")
        val identity = HString("identity")
}
object Qualifiers {
        val title = HString("title")
        val url = HString("url")
        val media = HString("media")
        val media_source = HString("media_source")
        val content = HString("content")
        val nolimitid_timestamp = HString("nolimitid.timestamp")
        val original_id = HString("original_id")
        val timestamp = HString("timestamp")
        val date_created = HString("date_created")
        val count = HString("count")
}
object Tables {
        val rawstream100 = HString("raw_stream_1.0.0")
        val rawstream = HString("rawstream")
}

class tmapper extends TableMapper[ImmutableBytesWritable, Put]{
  def map (row: ImmutableBytesWritable, value: Result, context: Context) {
    val put = new Put(row.get())
    for (kv <- value.raw()) {
        put.add(kv)
    }
    context.write(row, put)
  }
}

object Hello {
  val hbaseMaster = "127.0.0.1:60000"
  val hbaseZookeper = "127.0.0.1"
  def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
    conf.set("hbase.master", hbaseMaster)
    conf.set("hbase.zookeeper.quorum", hbaseZookeper)
    val hbaseAdmin = new HBaseAdmin(conf)

    val job = Job.getInstance(conf, "CopyTable")
    job.setJarByClass(classOf[Hello])
    job.setMapperClass(classOf[tmapper])
    job.setMapOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setMapOutputValueClass(classOf[Result])
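    // NOTE: the mapper below emits Put values, yet Result is declared here.
    // That mismatch is a likely cause of "Pass a Delete or a Put";
    // classOf[Put] is probably what is intended.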
    //
    job.setOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setOutputValueClass(classOf[Put])

        val scan = new Scan()
        scan.setCaching(500)         // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false)   // don't set to true for MR jobs

        TableMapReduceUtil.initTableMapperJob(
          Tables.rawstream100.bytes,     // input HBase table name
          scan,                      // Scan instance to control CF and attribute selection
          classOf[tmapper],  // mapper class
          null,             // mapper output key class
          null,     // mapper output value class
          job
        )

        TableMapReduceUtil.initTableReducerJob(
          Tables.rawstream,          // Table name
          null, // Reducer class
          job
        )
        val b = job.waitForCompletion(true);
        if (!b) {
            throw new IOException("error with job!");
        }
  }
}

class Hello {}

Thank you again

by ans4175 at December 18, 2014 10:01 PM

Fefe

Not only the police occasionally don't know the law ...

Not only the police occasionally don't know the law. Der Standard has found a lovely case (number 1 on the list). It concerns a "high-ranking prison guard officer" who was caught red-handed breaking into a discotheque. No charges were brought against him either, with this memorable justification:
As justification, reference is made to the official court assessment by the Klagenfurt psychiatrist Dr. Walter Wagner, in which he certifies that the burglar "was not in a position to consider the criminality of the act", since he had previously "looked into his accounts online and seen 'nothing but red' there".
Well THEN it's obviously something else entirely. Who wouldn't be legally insane in that situation!1!!

But that was only half of the argument. The other half was: he could not even have come close to stealing enough there to cover his debts. Der Standard's conclusion:

If you want to factor the prospect of legal impunity into planning your next burglary, take note of the following: first check your account balance, then steal less than your debts - better to break into the corner shop than the jeweler's.
Nothing to add to that.

December 18, 2014 10:00 PM

The good news: the UN General Assembly has ...

The good news: the UN General Assembly has passed, by a large majority, a resolution calling on the Security Council to finally drag these notorious torturers before the International Criminal Court.

The bad news: what? No, not the CIA. North Korea!1!!

Couldn't they do this sort of thing in summer, so that at least the weather isn't depressing too? Come on!

And Der Spiegel has the nerve to put this sentence in there as well:

The UN General Assembly is pushing to put the regime on trial and demands steps from the Security Council - but China has veto power there.
I do hope a precedent against the CIA can at least be spun out of this afterwards. *PUKE*

December 18, 2014 10:00 PM

StackOverflow

How to read from TCP and write to stdout?

I'm failing to get a simple scalaz-stream example running, reading from TCP and writing to stdout.

val src = tcp.reads(1024)
val addr = new InetSocketAddress(12345)
val p = tcp.server(addr, concurrentRequests = 1) {
  src ++ tcp.lift(io.stdOutLines)
}
p.run.run

It just sits there, not printing anything.

I've also tried various arrangements using to, always with the tcp.lift incantation to get a Process[Connection, A], including

tcp.server(addr, concurrentRequests = 1)(src) map (_ to tcp.lift(io.stdOutLines))

which doesn't even compile.

Do I need to wye the source and print streams together? An example I found on the original pull request for tcp replacing nio seemed to indicate this, but wye no longer appears to exist on Process, so confusion reigns unfortunately.

by Joe Kearney at December 18, 2014 09:52 PM

Planet FreeBSD

The Short List #8: fetchmailrc/gmail/ssl … grrr #FreeBSD

Didn’t realize that a fetchmail implementation I was using was actually *not* using SSL for a month.  I had installed security/ca_root_nss but FreeBSD doesn’t assume that you want to use the certificates in this package.  I don’t understand it, but whatever.

So, add this to your fetchmailrc to actually use the certificate authorities in there and really do SSL to your gmail account:

sslcertfile /usr/local/share/certs/ca-root-nss.crt

by Sean at December 18, 2014 09:48 PM

Planet Theory

Quick comments on the NIPS experiment

[One can tell it’s reviewing and letter-writing season when I escape to blogging more often..]

There’s been some discussion on the NIPS experiment, enough of it that even my neuro-scientist brother sent me a link to Eric Price’s blog post. The gist of it is that the program chairs duplicated the reviewing process for 10% of the papers, to see how many would get inconsistent decisions, and it turned out that 25.9% of them did (one of the program chairs predicted that it would be 20% and the other that it would be 25%, see also herehere and here). Eric argues that the right way to measure disagreement is to look at the fraction of papers that process A accepted that would have been rejected by process B, which comes out to more than 50%.

It’s hard for me to interpret this number. One interpretation is that it’s a failure of the refereeing process that people can’t agree on more than half of the list of accepted papers. Another viewpoint is that since the disagreement is not much larger than predicted beforehand, we shouldn’t be that surprised about it. It’s tempting when having such discussions to model papers as having some inherent quality score, where the goal of the program committee is to find all papers above a certain threshold. The truth is that different papers have different, incomparable qualities, that appeal to different subsets of people. The goal of the program committee is to curate an a diverse and intellectually stimulating program for the conference. This is an inherently subjective task, and it’s not surprising that different committees would arrive at different conclusions. I do not know what’s the “optimal” amount of variance in this process, but I would have been quite worried if it was zero, since it would be a clear sign of groupthink. Lastly, I think this experiment actually points out to an important benefit of the conference system. Unlike journals, where the editorial board tends to stay constant for a long period, in conferences one gets a fresh draw of the committee every 6 months or a year.


by Boaz Barak at December 18, 2014 09:45 PM

StackOverflow

How to diagnose or detect deadlocks in Java static initializers

(Whether using static initializers in Java is a good idea is out of scope for this question.)

I am encountering deadlocks in my Scala application, which I think are caused by interlocking static initializers in the compiled classes.

My question is how to detect and diagnose these deadlocks -- I have found that the normal JVM tools for deadlocks do not seem to work when static initializer blocks are involved.

Here is a simple example Java app which deadlocks in a static initializer:

public class StaticDeadlockExample implements Runnable
{
    static
    {
        Thread thread = new Thread(
                new StaticDeadlockExample(),
                "StaticDeadlockExample child thread");
        thread.start();
        try {
            thread.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        System.out.println("in main");
    }

    public static void sayHello()
    {
        System.out.println("hello from thread " + Thread.currentThread().getName());
    }

    @Override
    public void run() {
        StaticDeadlockExample.sayHello();
    }
}

If you launch this app, it deadlocks. The stack trace at time of deadlock (from jstack) contains the following two deadlocked threads:

"StaticDeadlockExample child thread" prio=6 tid=0x000000006c86a000 nid=0x4f54 in Object.wait() [0x000000006d38f000]
   java.lang.Thread.State: RUNNABLE
    at StaticDeadlockExample.run(StaticDeadlockExample.java:37)
    at java.lang.Thread.run(Thread.java:619)

   Locked ownable synchronizers:
    - None

"main" prio=6 tid=0x00000000005db000 nid=0x2fbc in Object.wait() [0x000000000254e000]
   java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    - waiting on <0x000000004a6a7870> (a java.lang.Thread)
    at java.lang.Thread.join(Thread.java:1143)
    - locked <0x000000004a6a7870> (a java.lang.Thread)
    at java.lang.Thread.join(Thread.java:1196)
    at StaticDeadlockExample.<clinit>(StaticDeadlockExample.java:17)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:169)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:116)

   Locked ownable synchronizers:
    - None

My questions are as follows

  1. Why is the first thread marked as RUNNABLE, when it is in fact waiting on a lock? Can I detect the "real" state of this thread somehow?
  2. Why is neither thread marked as owning any (relevant) locks, when in fact one holds the static intializer lock and the other is waiting for it? Can I detect the static initializer lock ownership somehow?

by Rich at December 18, 2014 09:10 PM

StackOverflow

Why is logback loading configurations in a different order and ignoring system properties (SBT)?

I've been trying to sort out my logging situation (How to properly manage logback configurations in development and production using SBT & Scala?), and I've run across a funny problem.

According to the logback documentation, logback checks for logback-test.xml before it checks for logback.xml.

I have the following files:

  • src/main/resources/logback.xml
  • src/test/resources/logback-test.xml

So I figured that when running sbt test, it would look to the logback-test.xml. This is true in intellij (which manages test execution itself), but does not seem to be true when running from the command line.

I renamed my logback.xml and turned on logback debugging, and here is the output. Clearly it's looking for the files in the reverse order:

14:58:21,203 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [/home/rathboma/Projects/personal/beekeeper/src/main/resources/logback.xml]
14:58:21,206 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
14:58:21,206 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback-test.xml] at [file:/home/rathboma/Projects/personal/beekeeper/target/scala-2.10/test-classes/logback-test.xml]

I'm speculating that it's because the test resources are in the test-classes directory, but I have no idea how to properly fix this.

SECONDLY, supplying -Dlogback.configurationFile=config/logback-#{environment}.xml doesn't seem to do anything; it totally ignores it.

Any thoughts?
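
One thing worth checking (an assumption about this setup, not verified from the question): plain sbt test runs tests inside the SBT JVM, so a -D flag only reaches them if it was passed to the JVM that launched sbt itself. Forking makes the behaviour explicit and gives the property a reliable way in; a minimal build.sbt sketch:

fork in Test := true

javaOptions in Test += "-Dlogback.configurationFile=config/logback-test.xml"

With forking on, javaOptions is applied to the test JVM, and logback checks the logback.configurationFile property before doing any classpath lookup.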

by Matthew Rathbone at December 18, 2014 09:05 PM

Fefe

We actually wanted to drop this nicely at 31c3 ...

We actually wanted to drop this nicely at 31c3, but Mr. Nohl apparently couldn't keep his dick, er, his ego in check that long: how to bypass UMTS encryption. The creepy thing about it is that this isn't a case of "well, they just picked bad crypto"; the protocol itself provides a way to get at the key. You sign on via SS7 and ask for the key. Granted, getting SS7 access isn't exactly a walk in the park; it does take a bit of effort.

In any case there will still be the talk about this at 31c3, plus another SS7 talk that goes in yet another direction and whose speaker hasn't given everything away in advance.

December 18, 2014 09:00 PM

Newt Gingrich on the Sony hack: it wasn't the hackers who ...

Newt Gingrich on the Sony hack:
it wasn't the hackers who won, it was the terrorists and almost certainly the North Korean dictatorship, this was an act of war
Newt Gingrich is one of the far-right extremists among the Republicans. He has never been known for intelligent contributions. But this is still remarkable. We are now already at "act of war". Under the NATO charter, an act of war would trigger the mutual-defense clause against North Korea. NATO declared this year that it also considers itself responsible for cyberwar. That's how far off the rails the rhetoric has already gone!

December 18, 2014 09:00 PM

StackOverflow

SBT Publish only when version does not exist

So I have a job in my CI app that publishes to Nexus when a change is pushed to develop on an app.

Is there a way to make ./sbt publish idempotent? Because occasionally we want to run the job again because of a temporary issue, and it'll error out with:

[16:31:24]java.io.IOException: destination file exists and overwrite == false
[16:31:24]  at org.apache.ivy.plugins.repository.url.URLRepository.put(URLRepository.java:75)
[16:31:24]  at org.apache.ivy.plugins.repository.AbstractRepository.put(AbstractRepository.java:130)
[16:31:24]  at sbt.ConvertResolver$ChecksumFriendlyURLResolver$class.put(ConvertResolver.scala:78)
[16:31:24]  at sbt.ConvertResolver$PluginCapableResolver$1.put(ConvertResolver.scala:103)
[16:31:24]  at org.apache.ivy.plugins.resolver.RepositoryResolver.publish(RepositoryResolver.java:216)

Because we've not bumped the version number. Right now I'm going with a hacky:

./sbt publish || true

So the job doesn't exit 1 and error out in CI. Is there a better way?
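
One less hacky sketch (illustrative, not a standard plugin; the Nexus URL and layout are assumed): probe the repository with an HTTP HEAD and only run publish when the version is absent. Def.taskDyn is needed because an sbt task cannot be run conditionally via .value inside a plain task body:

val publishIfMissing = taskKey[Unit]("Publish only when this version is absent")

publishIfMissing := Def.taskDyn {
  val repo = "https://nexus.example.com/content/repositories/releases" // assumed
  val path = s"${organization.value.replace('.', '/')}/" +
             s"${name.value}_${scalaBinaryVersion.value}/${version.value}/"
  val conn = new java.net.URL(repo + "/" + path)
    .openConnection().asInstanceOf[java.net.HttpURLConnection]
  conn.setRequestMethod("HEAD")
  if (conn.getResponseCode == 404) Def.task { publish.value }
  else Def.task { streams.value.log.info(s"${version.value} already published; skipping") }
}.value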

by Peter Souter at December 18, 2014 08:57 PM

scala self type can it really be compiled when using `extends`?

I'm looking at Scala "Types of Types", the self-type annotations section.

It says that new Service cannot be instantiated because a self type is used instead of extends. But I tried the example with extends as well, and it still does not compile.

class TryMe {
  class ServiceInModule {
    def doTheThings() = "dfdf"
  }
  trait Module {
    lazy val serviceInModule = new ServiceInModule
  }

  trait Service extends Module {
    def doTheThings() = serviceInModule.doTheThings()
  }

  trait TestingModule extends Module {

  }

  new Service
}

Error:(22, 3) trait Service is abstract; cannot be instantiated new Service ^

Am I missing something? Why does it claim that with extends it should compile? It does not compile...
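
For the record, a small sketch of what the compiler is objecting to: a trait can never be instantiated with a bare new, whether it uses extends or a self type; an (anonymous) subclass is always required. The difference the article is after only shows up in what that subclass must provide:

// With `trait Service extends Module` (the code above), this compiles:
new Service {}

// Had Service instead declared a self type,
//   trait Service { this: Module => ... }
// the anonymous instance would also have to mix the dependency in:
//   new Service with TestingModule {}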

by Jas at December 18, 2014 08:54 PM

CompsciOverflow

Mathematical model for a webpage layout?

Getting layout right (even if only a structure is considered) with HTML5/CSS3 is still more like an art or black magic.

On the other hand, there are other GUI systems (like wxWindows and Tcl/Tk) and some GUI research (like The Auckland Layout Model, ALM), which hint at the possibility of formalization for the layout managers (geometry managers).

Are there any comprehensible formal models for HTML5/CSS which provide an ultracompact (abstract) way to describe the structure, "physics" and "geometry" of resizeable webpages in a language of blocks? Ideally, HTML/CSS could be generated from such a model and would work more or less as described in standard browsers; conversely, a model could be derived from given HTML/CSS (browsers effectively do this with their layout algorithms, so it seems theoretically possible).

By "ultracompact" and abstract it is understood: much more compact than HTML/CSS and also more domain-oriented, "speaking" the language of webpage's dynamics in response to resizing or changed content, that is, higher level than HTML/CSS constructs.

For an analogy, it is possible to write a program to make a textual search, based on some complex rules, but the same task can be performed by a much more compact regular expression. So, is there similar compact language for HTML/CSS layout?

by Roman Susi at December 18, 2014 08:53 PM

StackOverflow

Actors in Scala.net

I have recently completed some study of Erlang, and was intrigued by Scala for its feature set and the ease of interoperating with Java (and possibly .NET) applications. I am finally studying actors and was wondering if there is an actor mechanism that currently works in .NET.

I have looked at the libraries that come down with sbaz and have found that there is a scala.Concurrent but no scala.actors.Actor. I tried to use the scala.Concurrent.Channel but was unable to use the ! to send messages.

I was just wondering if this is something that is currently available and if so how do you go about setting it up.

by Mike at December 18, 2014 08:49 PM

play framework throws StackOverflowError after I copied entire project to another folder

I have a working Play project executed by Play 2.2.3. After I copied the entire project into another folder it still starts. But when I make the first localhost:9000 request from the browser, I get the following response. Should I generate a new project and add all my files to it, or is there a way to fix the cloned project? Thanks, Vlad.

scala.MatchError: java.lang.StackOverflowError (of class java.lang.StackOverflowError)
     play.PlayReloader$$anon$1$$anonfun$reload$2$$anonfun$apply$14.apply(PlayReloader.scala:298)
     play.PlayReloader$$anon$1$$anonfun$reload$2$$anonfun$apply$14.apply(PlayReloader.scala:298)
     scala.Option.map(Option.scala:145)
     play.PlayReloader$$anon$1$$anonfun$reload$2.apply(PlayReloader.scala:298)
     play.PlayReloader$$anon$1$$anonfun$reload$2.apply(PlayReloader.scala:296)
     scala.util.Either$LeftProjection.map(Either.scala:377)
     play.PlayReloader$$anon$1.reload(PlayReloader.scala:296)
     play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:104)
     play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:102)
     scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
     scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
     scala.concurrent.forkjoin.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1361)
     scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
     scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
     scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
     scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

by vlakin2003 at December 18, 2014 08:46 PM

TheoryOverflow

Bisection Width of a Mesh Topology

What is the bisection width of a q-dimensional mesh topology with one dimension having k nodes, where bisection width splits the network as evenly as possible into two sets (with a difference of at most one node)? For even k, it seems to be k^(q-1), but what is it for odd k?

by Simon Ayzman at December 18, 2014 08:40 PM

StackOverflow

Slick create a new session and keep it open

My Scala application works with an Oracle database, which has a limit on active sessions. I'm using Akka actors for concurrent tasks against the Oracle DB via Typesafe Slick.

This is the example of work for actors:

def marketPlaceDataRefresh[T](targetArea:String, customerId:String, wave:String) =    
  clientPool.withClient(targetArea.toUpperCase) {
    implicit session =>
      sql"""BEGIN MARKETPLACE_PKG.INIT_DATA_REFRESH($customerId,$wave,$wave,$wave); COMMIT; END;""".as[Int].firstOption
  }

An implicit session is opened and closed every time marketPlaceDataRefresh is called by the Akka actors.

I know only the basics of concurrency and Slick.

How can I create a single session and keep it open for all concurrent tasks?
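
A minimal sketch of the mechanics (Slick 2.x-era API; the connection details are assumed), though see the caveat in the comments:

import scala.slick.jdbc.JdbcBackend.Database

val db = Database.forURL("jdbc:oracle:thin:@//host:1521/service",
                         driver = "oracle.jdbc.OracleDriver")

// One explicitly created session, reused instead of a fresh
// withClient/withSession per call; close it on shutdown.
implicit val session = db.createSession()

// Caveat: a Session wraps a single JDBC connection and is not thread-safe,
// so sharing it across concurrent actors requires serializing access
// (e.g. routing all DB calls through one actor); a bounded connection
// pool is the more common way to respect Oracle's session limit.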

by Intentio at December 18, 2014 08:40 PM

QuantOverflow

Finding Expression for Optimal Markowitz Weights

So there are two assets with return rates $r_1$ and $r_2$ which have identical variances and a correlation coefficient $p$. The risk free rate is $r_f$.

I need to find an expression for the optimal Markowitz weights for the two assets.

The book says that the answer is ($s_1 - p s_2$)/[($s_1-s_2$)*($1-p$)], but I'm not sure how this makes any sense as I don't know what the s's mean.

Thank you
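
A guess at the notation (my assumption, not from the book): in problems of this shape, $s_i$ usually denotes the Sharpe ratio of asset $i$,

$s_i = \frac{r_i - r_f}{\sigma},$

with $\sigma$ the standard deviation common to both assets.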

by stochman at December 18, 2014 08:31 PM

StackOverflow

Update actor state only after all events are persisted

In the receive method of a persistent actor, I receive a bunch of events I want to persist, and only after all the events are persisted update my state again. How can I do that?

def receive: Receive = {
  ...
  case NewEvents(events) =>
    persist(events) { singleEvent =>
      // Update state using this single event
    }
    // After every events are persisted, do one more thing
}

Note that the persist() call is not blocking so I cannot put my code just after that.
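
One hedged sketch (updateState and afterAllPersisted are assumed helper names, not Akka API): the persist handler is invoked once per event, in order, so the callback for the last event can trigger the follow-up:

case NewEvents(events) =>
  var remaining = events.size
  persist(events) { singleEvent =>
    updateState(singleEvent)        // update state using this single event
    remaining -= 1
    if (remaining == 0) afterAllPersisted()  // runs after the final persist
  }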

by Dimitri at December 18, 2014 08:24 PM

UnixOverflow

Cannot create new zpool

I'm working on creating a ZFS zpool and I'm getting the following error:

# zpool create main_zfs raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq -f

the kernel failed to rescan the partition table: 16

cannot label 'sdc': try using parted(8) and then provide a specific slice: -1

I found another post on this subject here, but I can confirm that these disks have never been part of an MD array before.

Here's what I've tried so far:

  1. According to mount, none of these disks are currently mounted (they don't even have a file system anyway).

  2. I've used zpool labelclear /dev/sd[c-q] to erase any ZFS metadata from them.

  3. I've used dd to just zero the disks out.

by Cameron at December 18, 2014 08:06 PM

Fefe

Amid all the terrible revelations about the NSA ...

Amid all the terrible revelations about the NSA, one loses sight of what a terrible outfit the CIA is. And not just when it comes to torture.
The document bears the plain title "Best Practices in Counterinsurgency", but gets down to business right in the subtitle: "Making High-Value Targeting Operations an Effective Counterinsurgency Tool". The paper on "high-value targeting" originated in the Office of Transnational Issues (OTI), which boasts on the CIA's website that it offers "unique functional expertise to assess existing and emerging threats to US security".
The document in question is the CIA's assassination manual, leaked by Wikileaks.

December 18, 2014 08:00 PM

StackOverflow

Light Table auto close Parenthesis on Windows 7 International Spanish Keyboard

I'm using an International Spanish keyboard with dead keys. The following code works for "'(){} but not for []; any clues?

To test I used:

[:editor.keys.normal "ctrl-`" :tabs.next]

and nothing happened.

Light Table 0.7.2
user.keymaps

;; Auto Close
[:editor.keys.normal "\"" (:editor.repeat-pair "\"")]
[:editor.keys.normal "'" (:editor.repeat-pair "'")]
[:editor.keys.normal "(" (:editor.open-pair "(")]
[:editor.keys.normal ")" (:editor.close-pair ")")]

[:editor.keys.normal "ctrl-alt-[" (:editor.open-pair "[")]
[:editor.keys.normal "ctrl-alt-]" (:editor.close-pair "]")]

[:editor.keys.normal "ctrl-alt-'" (:editor.open-pair "{")]
[:editor.keys.normal "ctrl-'" (:editor.open-pair "{")]
[:editor.keys.normal "ctrl-alt-ç" (:editor.close-pair "}")]
[:editor.keys.normal "ctrl-ç" (:editor.close-pair "}")]

Thanks for your time.

by FreeFog at December 18, 2014 07:45 PM

How to pattern match different ways depending on a particular input?

I have a function:

closeTo61 :: Tactic
closeTo61 s P2 h d e b 
    | cs + ps < 53 = 0
    | cs + ps == 61 = 100 -- Best case
    | numWaysToScoreN (61 - (cs + ps)) h b == 0 = 0 -- Worst case
    | numWaysToScoreN (61 - (cs + ps)) h b == 1 = 50 -- Strong case
    | numWaysToScoreN (61 - (cs + ps)) h b == 2 = 70 -- Very Strong case
    | numWaysToScoreN (61 - (cs + ps)) h b >= 3 = 90 -- Extremely Strong case
        where 
            ps = scoreDom d e b
            (_,cs) = s

closeTo61 s P1 h d e b 
    | cs + ps < 53 = 0
    | cs + ps == 61 = 100
    | numWaysToScoreN (61 - (cs + ps)) h b == 0 = 0
    | numWaysToScoreN (61 - (cs + ps)) h b == 1 = 50
    | numWaysToScoreN (61 - (cs + ps)) h b == 2 = 70
    | numWaysToScoreN (61 - (cs + ps)) h b >= 3 = 90
        where 
            ps = scoreDom d e b
            (cs,_) = s

The only reason I've done this with two bindings for each possible input of the second argument is because in the where block the cs is pattern matched differently depending on this input.

Is there a way to do this using only one binding and checking the second input inside the where block in order to use the correct pattern?

by Charles Del Lar at December 18, 2014 07:39 PM

Understanding use of "this: SomeClassOrTrait =>" in Scala [duplicate]

This question already has an answer here:

What is "this: Core =>" in CoreActors below.

import akka.actor.ActorSystem
import com.demo.service.DemoService
import com.typesafe.config.ConfigFactory

trait Core {
  implicit def system: ActorSystem
}

trait BootedCore extends Core {
  implicit lazy val system = ActorSystem("demo-microservice-system")

  sys.addShutdownHook(system.shutdown())
}

trait ConfigHolder {
  val config = ConfigFactory.load()
}

trait CoreActors extends ConfigHolder {
  this: Core =>

  val demoService = system.actorOf(
    DemoService.props("identity"), "DemoService")

  val services: Services = Map(
    "demoService" -> demoService
  )
}

by Srini K at December 18, 2014 07:27 PM

How to run compojure rest server?

I've created a rest server with compojure and ring.

I can run it with 'lein ring server'. I can build it with 'lein uberjar'. But how do I run this jar, as in java -jar my.jar ...?

by Curiosity at December 18, 2014 07:27 PM

Undeadly

Michael W. Lucas' Sudo Talk Online

Michael W. Lucas, author of Absolute OpenBSD, SSH Mastery, and Sudo Mastery (among others!) has given a talk, titled "Sudo: You're Doing it Wrong", now online:

It runs just over an hour, so make sure you bring a snack!

December 18, 2014 07:09 PM

StackOverflow

how to downgrade proguard version in android studio gradle?

I am trying to build an android app using scala and android studio. The compile fails at proguard with an exception:

Error:java.lang.ArrayIndexOutOfBoundsException: 4
    at proguard.classfile.editor.InterfaceDeleter.visitSignatureAttribute(InterfaceDeleter.java:162)
    at proguard.classfile.attribute.SignatureAttribute.accept(SignatureAttribute.java:97)

I found at another place (http://sourceforge.net/p/proguard/bugs/549/) that this issue is caused by a bug in scala, but that it only occurs in proguard 5.1 and not in proguard 5.0.

Now my question is: how can I set up Android Studio so that it will use proguard 5.0?

by user114676 at December 18, 2014 07:07 PM

Matt Might

Equational derivations of the Y combinator and Church encodings in Python

I love the Y combinator and Church encodings.

Every time I explain them, I feel like I’m using sorcery.

I’ve written posts on memoizing recursive functions with the Y combinator in JavaScript and on the Church encodings in Scheme and in JavaScript.

When I spoke at Hacker School, I used Python as the setting in which to derive Church encodings and the Y combinator for the first time.

In the process, Python seemed to hit a sweet spot for the explanation: it’s a popular language, and the syntax for lambda is concise and close to the original mathematics.

I’m distilling the technical parts of that lecture into this post, and in contrast to prior posts, I’m taking a purely equational reasoning route to Church encodings and the Y combinator – all within Python.

In the end, we’ll have constructed a programming language out of the lambda calculus, and we’ll arrive at the factorial of 5 in the lambda calculus, as embedded in Python:

(((lambda f: (((f)((lambda f: ((lambda z: (((f)(((f)(((f)(((f)(((f)
(z)))))))))))))))))))((((((lambda y: ((lambda F: (((F)((lambda x:
(((((((y)(y)))(F)))(x)))))))))))((lambda y: ((lambda F: (((F)((lambda x:
(((((((y)(y)))(F)))(x)))))))))))))((lambda f: ((lambda n: ((((((((((((
lambda n: (((((n)((lambda _: ((lambda t: ((lambda f: (((f)((lambda void:
(void)))))))))))))((lambda t: ((lambda f: (((t)((lambda void: (void)))))
))))))))((((((lambda n: ((lambda m: (((((m)((lambda n: ((lambda f:
((lambda z: (((((((n) ((lambda g: ((lambda h: (((h)(((g)(f)))))))))))
((lambda u: (z)))))((lambda u: (u)))))))))))))(n))))))) (n)))((lambda f:
((lambda z: (z)))))))))((lambda _: ((((lambda n: (((((n) ((lambda _: ((
lambda t: ((lambda f: (((f)((lambda void: (void))))))))))))) ((lambda t:
((lambda f: (((t)((lambda void: (void))))))))))))) ((((((lambda n: 
((lambda m: (((((m)((lambda n: ((lambda f: ((lambda z: (((((((n) ((lambda
g: ((lambda h: (((h)(((g)(f)))))))))))((lambda u: (z)))))((lambda u:
(u)))))))))))))(n)))))))((lambda f: ((lambda z: (z)))))))(n)))))))))
((lambda _: ((lambda t: ((lambda f: (((f)((lambda void: (void)))))))))))
))((lambda _: ((lambda f: ((lambda z: (((f)(z)))))))))))((lambda _: (((
(((lambda n: ((lambda m: ((lambda f: ((lambda z: (((((m)(((n)(f)))))(z)
))))))))))(n)))(((f) ((((((lambda n: ((lambda m: (((((m)((lambda n:
((lambda f: ((lambda z: (((((((n) ((lambda g: ((lambda h: (((h)(((g)(f)
))))))))))((lambda u: (z)))))((lambda u: (u)))))))))))))(n)))))))(n)))
((lambda f: ((lambda z: (((f) (z))))))))))))))))))))))))(lambda x:x+1)(0)

Run the above in your Python interpreter. It’s equal to 120.

As a bonus, this post is a proof that the indentation-sensitive constructs in Python are strictly optional.

Read below for more.

Click here to read the rest of the article

December 18, 2014 07:01 PM

CompsciOverflow

A reference for pseudocode for Monge-Elkan algorithm?

Does anyone have a good reference to pseudocode for the Monge-Elkan string comparison algorithm, or a single-class Java/C implementation?

I have access to the original two papers, but they do not show the pseudocode of the actual algorithm. Also, I have seen some implementations in Java (my preference), but they are part of larger packages with complex inheritance and composition hierarchies.

I was wondering if someone can point me to pseudocode for the algorithm, so that I could implement it in JavaScript.
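
Since no pseudocode is reproduced here, a compact sketch (in Scala rather than JavaScript, and using a normalized-Levenshtein inner measure as a stand-in; the papers leave the inner similarity pluggable):

// Monge-Elkan: average, over the tokens of s, the best inner-similarity
// match among the tokens of t. Note it is asymmetric in s and t.
def mongeElkan(s: String, t: String)(sim: (String, String) => Double): Double = {
  val as = s.split("\\s+"); val bs = t.split("\\s+")
  as.map(a => bs.map(b => sim(a, b)).max).sum / as.length
}

// Example inner measure: 1 - normalized Levenshtein distance.
def levSim(a: String, b: String): Double = {
  if (a.isEmpty && b.isEmpty) return 1.0
  val d = Array.tabulate(a.length + 1, b.length + 1)((i, j) =>
    if (j == 0) i else if (i == 0) j else 0)
  for (i <- 1 to a.length; j <- 1 to b.length)
    d(i)(j) = math.min(math.min(d(i - 1)(j) + 1, d(i)(j - 1) + 1),
                       d(i - 1)(j - 1) + (if (a(i - 1) == b(j - 1)) 0 else 1))
  1.0 - d(a.length)(b.length).toDouble / math.max(a.length, b.length)
}

mongeElkan("paul johnson", "johson paule")(levSim)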

by Edmon at December 18, 2014 06:57 PM

What is MongeElkan distance between words?

I have used CosineSimilarity, EuclideanSimilarity, and LevenshteinSimilarity and know how they calculate the distance between two words. Can anyone explain how the Monge-Elkan distance calculates similarity?

by Exbury at December 18, 2014 06:57 PM

StackOverflow

Passing elements of multi-dimensional array in Ansible using Extra-vars

I have a harness to build VMs using Packer that in turn calls Ansible (in local mode) to do the heavy lifting.
I'd like to be able to pass parameters to Packer (got that), which it passes on to Ansible as extra vars.

I can pass an external variables file and also a simple variable, such as the example below.

ansible-playbook -v -c local something.yml --extra-vars "deploy_loc=custom"

That's okay, but I really need to pass a more complex structure of variables, such as the examples below.
I've tried a number of formats, such as the one below, and usually get some kind of delimiter error.

ansible-playbook -v -c local something.yml --extra-vars 'deploy_loc=custom deploy_scen: [custom][ip=1.2.34]}'

Role variable file

# Which location
deploy_loc: 'external-dhcp'

# location defaults
deploy_scen:
  custom:
     ipv4: yes
     net_type: dhcp
     ip: '1.1.1.1'
     prefix: '24'
     gw: '1.1.1.1.254'
     host: 'custom'
     domain: 'domain.com'
     dns1: '1.1.1.2'
  standard-eng:
     ipv4: yes
     net_type: none
     ip: '12.12.12.5'
     prefix: '24'
  external-dhcp:
     ipv4: yes
     net_type: dhcp
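
For the record, --extra-vars also accepts JSON, which is the usual way to hand over nested structures; a sketch with illustrative values:

ansible-playbook -v -c local something.yml --extra-vars '{"deploy_loc": "custom", "deploy_scen": {"custom": {"ip": "1.2.3.4"}}}'

Quoting the whole JSON blob in single quotes keeps the shell from eating the inner double quotes.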

by stephen mayo at December 18, 2014 06:43 PM

Unit Testing Local Functions (letfn) in Clojure?

I spent a couple of years doing Scheme "back in the day" and am now learning Clojure. One of the "best practices" in Scheme was to define helper functions within the parent function thus limiting their visibility from "outside." Of course back then TDD wasn't done (or known!) so testing such functions wasn't an issue.

I'm still tempted to structure Clojure functions this way; i.e., using letfn to bind helper functions within the main function. Of course testing such "local" functions is problematic. I realize I can define "private" functions, but this scopes the visibility to the namespace which helps, but is not as fine grained. If you come upon a letfn within another function it's pretty clear that the function is not available for general use.

So, my question is, can one test such local functions and if so, how? If not, then is there some convention to aid in code reading so that it's clear that a function has only one caller?

TIA, Bill

by Bill Cohagan at December 18, 2014 06:43 PM

TheoryOverflow

What are the major research issues in distributed transactions?

Background: Transaction processing has been a traditional research topic in database theory. Nowadays distributed transactions are popularized by large-scale distributed storage systems, which typically involve data partitioning (also called sharding) and data replication.

What are the major research issues in distributed transactions?

Are there well-known theories and solutions which need (theoretical) improvement?

Any references are appreciated.

by hengxin at December 18, 2014 06:16 PM

Logics for timed resource control

I'm studying proof theory and I've seen that linear logic can be used as a way to control resource usage, since by the propositions-as-types correspondence it is equivalent to the linear lambda calculus.

Is there a logic that allows resource control (like linear logic) and can express properties that vary over a notion of time, like linear-time temporal logic (LTL)?

Any reference or point to literature is highly appreciated.

by Rodrigo Ribeiro at December 18, 2014 06:09 PM

Recommendations for References on undecidability of First Order Logic

I am currently reading Computability and Logic by Boolos, Burgess, and Jeffrey for the proof of the undecidability of first-order logic. However, I find the notation a bit confusing. Can anyone recommend a resource - website, video lecture, or perhaps a book - that will help me understand the proof of the undecidability of first-order logic? I am a CS student, so I do not want a purely mathematical/philosophical proof; I came across plenty of those on the web.

by user59288 at December 18, 2014 06:06 PM

CompsciOverflow

Why do heuristic functions only approximate the real value of the cost?

As stated in the title, I'm wondering why heuristic functions only approximate the real value of the cost. I understand that (to be admissible) a heuristic can never overestimate, but can it ensure the cost is accurate?

by orange at December 18, 2014 06:02 PM

CompsciOverflow

Determining the minimum number of edges to add in order to be 3-connected

A graph $G$ is said to be $3$-connected if there exist no cutting pairs in the graph. As far as I know, it is possible to determine whether a simple graph is $3$-connected in $O(n)$ time (example: http://www2.tu-ilmenau.de/combinatorial-optimization/Schmidt2012b.pdf), but I would find it useful to efficiently determine which edges to add in order to make a graph $3$-connected if it isn't already (ideally, the minimum number of edges if this can be done efficiently). Is anyone aware of such an algorithm? If so, I would appreciate a reference or two.

by Zachary Frenette at December 18, 2014 05:56 PM

Planet Emacsen

Mickey Petersen: Announcing my new Mastering Emacs book

After six months (and several years of procrastinating and chiselling away at this) I’m pleased to announce that the Mastering Emacs book will be out Soon(TM).

Learn more about the book

It’s fair to say there’s a need for another book on Emacs.

I’ve spent four years writing about Emacs and I’ve realized what Emacs needs more than anything is a book that takes you from knowing something (or nothing) about Emacs to a point where you are comfortable. Those of you reading this who know Emacs well should know what I’m talking about: that moment of clarity when you finally understand how Emacs works.

Unfortunately Emacs’s adoption is marred by this unnecessary complexity; the confusion over the keys, the byzantine terminology, the often paradoxical functionality of common operations. They’re not hard concepts to learn (no really) – but the fragments of knowledge that you need to learn to understand this you won’t find in one single source.

The book will hopefully set out to correct that. I have spent the first 100 pages (of a planned total of 200) with the aim of teaching the reader why Emacs is the way it is – a tall order for a text editor older than a lot of its users – and what you need to know, here and now, to overcome these difficulties.

The remainder of the book is hands-on, practical Emacs Stuff: how to move around; how to edit text; how to combine all these commands and tools into what I call “workflow.” For the next-hardest thing (after you’re comfortable with Emacs) is knowing how to use the tools.

Check out the book landing page and sign up to be notified when it’s out!

by Mickey Petersen at December 18, 2014 05:40 PM

CompsciOverflow

Determine if the following family of hash functions is universal [on hold]

Let $H = \{h_1,h_2,h_3\}$ be the family of hash functions defined below, each mapping $\{a,b,c,d,e\}$ to $\{0,1,2\}$. Is $H$ universal?

(The table defining $h_1$, $h_2$, $h_3$ was given as an image and is not reproduced here.)

A family of hash functions is universal if $\forall x,y\in U, x\not = y:\Pr \limits_{h\in H}[h(x)=h(y)] \le \frac{1}{m}$, where $m$ is the number of elements in the codomain ($3$ in this case) and $U$ is the universe (i.e. $\{a,b,c,d,e\}$).

by Kelsey at December 18, 2014 05:34 PM

Planet FreeBSD

Upgrade to PC-BSD 10.1 is Now Live!

Hey everyone!

Kris has made the update to 10.1 live on the servers. To upgrade to 10.1 you can simply open the update GUI and start the update from there. You will notice the update takes a little longer to complete, but the good news is it runs in the background and there are no unexpected resets :).

If you are on the EDGE repo you most likely have the newest broken version of pkg which will need to be fixed before upgrading. To fix pkg:

% pkg install -f pkg

After that you should be in business. Please send us your feedback and / or any questions!

by Josh Smith at December 18, 2014 05:27 PM

StackOverflow

Overriding parameterized methods in Scala

Note: I apologise for altering this question; previously I wasn't able to express the exact issue I am facing.

Let's say we have an abstract parameterized class abs,

abstract class abs[T] {
    def getA(): T
}

class ABS[Int] extends abs[Int] {
    override def getA(): Int = 4
}

gives the following incomprehensible error:

<console>:28: error: type mismatch;
 found   : scala.Int(4)
 required: Int
       override def getA(): Int = 4
                              ^

I want to override the method getA.

Also an explanation or some helpful link would be really appreciated.
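
For what it's worth, a likely reading of that error (my diagnosis, not from the post): class ABS[Int] declares a fresh type parameter that merely happens to be named Int, shadowing scala.Int. A sketch of the fix is to supply Int as a type argument instead of declaring it:

abstract class abs[T] {
  def getA(): T
}

// supply Int as the type argument; do not declare a parameter named "Int"
class ABS extends abs[Int] {
  override def getA(): Int = 4
}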

by Kamal Banga at December 18, 2014 05:21 PM

/r/netsec

Fefe

Eine per Mail reingekommene Theorie will ich hier mal ...

Eine per Mail reingekommene Theorie will ich hier mal weitergeben.

Die Theorie ist, dass die ganzen Rechtsextremen von SPD und CDU sich seit Jahren selbst in die Tasche lügen, sie seien liberal, menschenfreundlich und gut. Die glauben sich das alle gegenseitig, und daher kommt ihr Eindruck, sie seien von linken Zecken umgeben und die Linken hätten sooo viel Einfluss in der Politik und da müsse man mal gegensteuern.

December 18, 2014 05:01 PM

StackOverflow

ZMQ message_t and adding message to a vector

This is probably me doing something stupid so bear with me...

I can receive a message from a port using zmq. In this example, the meta data (first 16 bytes) is output correctly:

zmq::message_t img;
imageSocket->recv(&img,ZMQ_NOBLOCK);
if((int)img.size() > 0){
  localQueue->addToQueue((unsigned char*) img.data());

  //For testing:
  //Output meta data so it's clear the image is received
  unsigned char* data = (unsigned char*) img.data();
  char temp[16];
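  // NB: temp is never NUL-terminated, so streaming it with << below is
  // undefined behavior; char temp[17] with temp[16] = '\0' would be safer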

  int j = 0;
  for(j=0;j<16;j++){
    temp[j] = data[j];
  }

  cout<<"Received image meta: "<< temp << endl;
}

The addToQueue method is (imgSize is already set and is correct):

void rxImageQueue::addToQueue(unsigned char* data){
  cout<<"pushing back1 of size: "<<imgSize<<endl;
  queue.push_back( boost::shared_ptr<rxImage> (new rxImage(data, imgSize)) );
}

The queue object is stored in this function and is:

std::vector<boost::shared_ptr<rxImage> > queue;

Finally, rXImage is a class containing:

unsigned char *data;

And the constructor is:

rxImage::rxImage(unsigned char* image, int imgSize)
{
  //data = image
  data = new unsigned char[imgSize];
  std::memcpy(data, image, memorySize);
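  // NB: memorySize here does not match the imgSize parameter used for the
  // allocation above; if the two differ, the copy size is wrong, which is a
  // plausible cause of the empty output described below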
}

When I access a message stored within this rxImage class (this example is just the 0th element of the vector), the output is nothing, there are no characters:

string metaLast = "";
for(int i =0;i<16;i++){
  metaLast += (*path_to_queue*->localQueue->queue[0]->data[i]);
}
cout<<"last image: " <<metaLast <<endl;

This must mean that somewhere along the chain, the pointer is incorrectly passed to the next function as the memory is not set, but the message_t is still in scope so the data at the pointer location must still be valid. What am I doing wrong here?

by user2290362 at December 18, 2014 05:00 PM

Lobsters

StackOverflow

Scala: please explain the generics involved

Could someone please explain the generics involved in the following code from the Play framework?

class AuthenticatedRequest[A, U](val user: U, request: Request[A]) extends WrappedRequest[A](request)

class AuthenticatedBuilder[U](userinfo: RequestHeader => Option[U],
        onUnauthorized: RequestHeader => Result = _ => Unauthorized(views.html.defaultpages.unauthorized()))
          extends ActionBuilder[({ type R[A] = AuthenticatedRequest[A, U] })#R]

ActionBuilder actually takes a type R[A], which is being redefined here; this much I understand. Please explain the intricacies of the syntax.
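
For what it's worth, here is a self-contained sketch of the idea (simplified names, not Play's actual API): the ({ type R[A] = ... })#R construct is a "type lambda" that fixes U on the spot, leaving exactly the one type hole that ActionBuilder expects:

trait Builder[R[_]]      // stands in for ActionBuilder: wants a one-parameter type
class AuthReq[A, U]      // stands in for AuthenticatedRequest: has two parameters

// The structural type declares a local alias L[A] with U already applied,
// and #L projects it out as the one-hole type constructor Builder needs.
class AuthBuilder[U] extends Builder[({ type L[A] = AuthReq[A, U] })#L]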

by Venki at December 18, 2014 05:00 PM

Is there anything wrong with asInstanceOf in this example using generics?

Consider this (somewhat contrived) example:

abstract class Obj[A, B] {
    def id: Long
    def parent: B
}

abstract class TopLevel[A] extends Obj[A, A] {
    def parent: A = this.asInstanceOf[A] // How terrible is this?
}

abstract class AbsChild[A, B] extends Obj[A, B] {
    def parent: B
}

case class Top(id: Long) extends TopLevel[Top]

case class Child(id: Long, parent: Top) extends AbsChild[Child, Top]

To paint a better picture, imagine AbsChild as some kind of directory on a file system, and TopLevel as the physical drive that an AbsChild belongs to. So parent doesn't actually refer to the direct parent of the object (like the directory that contains it), but rather a reference to the top level object in the tree.

In some applications, I'm going to be dealing with a List[Obj[A, B]], where it isn't immediately known what Obj is. In this case, it would be nice for even a TopLevel to have a parent, which should just return a reference to itself. And herein lies my question.

Defining def parent: A = this for TopLevel doesn't work:

<console>:14: error: type mismatch;
 found   : TopLevel.this.type (with underlying type TopLevel[A])
 required: A

But def parent: A = this.asInstanceOf[A] does, and seems to function correctly in practice.

scala> val top = Top(1)
top: Top = Top(1)

scala> val child = Child(1, top)
child: Child = Child(1,Top(1))

scala> top.parent
res0: Top = Top(1)

scala> child.parent
res1: Top = Top(1)

But is this really okay? Using asInstanceOf[A] feels incredibly dirty, and leaves me wondering if it will fail somehow with a ClassCastException.
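
One cast-free alternative worth considering (my sketch, reusing the post's Obj): a self-type annotation lets the compiler prove that this is an A, at the cost of every subclass having to extend TopLevel of its own type (which Top already does):

abstract class TopLevel[A] extends Obj[A, A] { self: A =>
  def parent: A = this // no asInstanceOf: the self-type guarantees this is an A
}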

by m-z at December 18, 2014 04:54 PM

/r/emacs

QuantOverflow

Various ways to choose bonds for a butterfly strategy?

What are the various ways to choose bonds for a butterfly strategy? For eg., I already know the most common one i.e., choosing short and long term for the wings (barbell) and the medium term for the body (Bullet).

by Shashank at December 18, 2014 04:54 PM

/r/emacs

CompsciOverflow

Hamming and BCH codes

Why are Hamming codes the best 1-error-correcting codes? I need references. I know that Hamming codes are the best 1-error-correcting codes, but I want to know why that is.

by ahmad mirzaei at December 18, 2014 04:46 PM

Overlap between fields in CS

I hope this isn't too meta.

I have finally had some serious graduate-level exposure to CS theory and loved it. I really enjoyed complexity theory (time and space complexity, the different classes, reductions to prove NP-completeness) and algorithm analysis. I am still very interested in operating systems, software engineering, and network/information security. So my question is: what would be some starting places to look into if I want to find direct overlaps between CS theory (algorithm design and analysis, complexity theory, information theory, etc.) and security, OS, or software engineering? I guess I am looking for areas that might have project possibilities that will test and expand my knowledge in both theory and either OS or SE.

For Security the best one I could think of is theory of cryptography, but I am kind of at a loss when it comes to the other two.

Thanks in advance!

by bitterman at December 18, 2014 04:38 PM

TheoryOverflow

Pseudorandom generator for finite automata

Let $d$ be a constant. How can we provably construct a pseudorandom generator that fools $d$-state finite automata?

Here, a $d$-state finite automaton has $d$ nodes, a start node, a set of nodes representing accept states, and two directed edges labeled 0 and 1 coming out of each node. It changes state in the natural way as it reads input. Given an $\epsilon$, find $f:\{0,1\}^{k}\to \{0,1\}^n$ such that for every $d$-state finite automaton computing some function $A$,

$$|\mathbb P_{x\sim U_{k}}(A(f(x))=1)-\mathbb P_{x\sim U_n}(A(x)=1)|< \epsilon.$$

Here $U_k$ denotes the uniform distribution on $k$ variables, and we want $k$ to be as small as possible (e.g. $\log n$). I'm thinking of $d$ being on the order of $n$, though we can also ask the question more generally (e.g., would the number of bits required grow with $n$?).

Some background

Construction of pseudorandom generators is important in derandomization, but the general problem (PRG's for polynomial-time algorithms) has so far proved too difficult. There has however been progress on PRG's for bounded-space computation. For example this recent paper (http://homes.cs.washington.edu/~anuprao/pubs/spaceFeb27.pdf) gives a bound of approximately $\log n\log d$ for regular read-once branching programs. The question with general read-once branching programs is still open (with $k=\log n$), so I'm wondering if the answer for this simplification is known. (A finite automaton is like a read-once branching program where every layer is the same.)

by Holden Lee at December 18, 2014 04:34 PM

StackOverflow

how to get a set of elements in a list based on element index in scala

I have one list, and another list that contains the indexes I am interested in, e.g.

val a=List("a","b","c","d")
val b=List(2,3)

then I need to return a list whose value is List("b","c"), since List(2,3) says I'd like to take the 2nd and 3rd elements from list a. How do I do that?
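
A sketch of one way to do it (assuming, as in the example, that the indexes in b are 1-based):

val a = List("a", "b", "c", "d")
val b = List(2, 3)
val result = b.map(i => a(i - 1)) // List("b", "c"); subtract 1 for 0-based access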

by Daniel Wu at December 18, 2014 04:31 PM

clojureql query with sub-select

Given a table with the following structure:

CREATE TABLE transitions (id INT, ordering INT, item_id INT, action_id INT)

Is it possible to get ClojureQL to generate a query like this one:

SELECT a.item_id, a.action_id
  FROM transitions a
 WHERE a.ordering = (SELECT MAX(b.ordering)
                       FROM transitions b
                      WHERE b.item_id = a.item_id
                    )

This will return many rows, one for each item, indicating that item's latest transition.

I've been looking at using join, but worry I might hit this bug: https://github.com/LauJensen/clojureql/issues/114

by Tom at December 18, 2014 04:26 PM

Planet Theory

The negative impacts of random conference decisions

The NIPS experiment is making waves.  If you are unaware, for the last NIPS conference, the PC was broken into two independent halves A and B.  A random selection of the submissions were assigned to both committees.  The result: 57% of the papers that were accepted by committee A were rejected by committee B (and […]

The post The negative impacts of random conference decisions appeared first on Glencora Borradaile.

by Glencora Borradaile at December 18, 2014 04:23 PM

/r/compsci

Best C++ IDE for beginner windows user?

I'm taking a second-year C++ course next year at U of Michigan, but my only C++ experience so far is through VS 2008 many years ago. I have both a windows desktop and laptop, so I would like to use Visual Studio, but I've heard that VS has both its own nuances and flexibility that the autograder might not be so nice about. Any recommendations for something I could set up so I can start brushing up on C++ this break?

Edit: Thanks for all the replies! I'll take some of these ideas up with professors/friends and see what they think.

submitted by jussnf

December 18, 2014 04:23 PM

QuantOverflow

Kelly Capital Growth Investment Strategy (Example in R)

In the paper "Response to Paul A Samuelson letters and papers on the Kelly Capital Growth Investment Strategy", pages 5 and 6, Dr. William T. Ziemba gives a practical example of Kelly growth.

I’m trying to replicate the simulation explained there in R:

Step 1 : Create the Table as a Data.Frame

Win.Prob <- c(0.57,0.38,0.285,0.228,0.19)
Odds <- c("1-1","2-1","3-1","4-1","5-1")
Implied.Odds <-c(0.5,0.333,0.25,0.2,0.167)
Edge <- c(0.07,0.0467,0.035,0.028,0.0233)
Advantage <- c(0.14,0.14,0.14,0.14,0.14)
Opt.Kelly <- c(0.14,0.07,0.0467,0.035,0.028)
Prob.Chose.Bet <- c(0.1,0.3,0.3,0.2,0.1)
Cum.Prob.Bet <- c(0.1,0.4,0.7,0.9,1)
Kelly.Example <- data.frame(Win.Prob,Odds,Implied.Odds,Edge,Advantage,Opt.Kelly,Prob.Chose.Bet,Cum.Prob.Bet)
remove(Win.Prob,Odds,Implied.Odds,Edge,Advantage,Opt.Kelly,Prob.Chose.Bet,Cum.Prob.Bet)

Step 2 : Create the function that replicates the simulation

# Initiate the function that takes 4 variables (Initial Wealth, Number of Simulations, Simulation Steps, Kelly Fraction)

kelly.simulation <- function(InitialWealth,SimulationNumber,SimulationSteps,KellyFraction) {

  #Initiate a Matrix that generates SimulationSteps*SimulationNumber random numbers and Attribute to the Bet choice
  simu_bets <- matrix(sample.int(5, size = SimulationSteps*SimulationNumber, replace = TRUE, prob = c(.1,.3,.3,.2,.1)),nrow=SimulationSteps,ncol=SimulationNumber)

  #Take the chosen bet in simu_bets and create a new matrix of Optimal Kelly Bets based on the table in Kelly.Example
  simu_kellybets <- ifelse(simu_bets == 1,Kelly.Example$Opt.Kelly[1],
                           ifelse(simu_bets == 2,Kelly.Example$Opt.Kelly[2],
                              ifelse(simu_bets == 3,Kelly.Example$Opt.Kelly[3],
                                         ifelse(simu_bets == 4,Kelly.Example$Opt.Kelly[4],Kelly.Example$Opt.Kelly[5]))))

  #Take the chosen bet in simu_bets and create a new matrix of Winning Probability based on the table in Kelly.Example
   simu_prob <- ifelse(simu_bets == 1,Kelly.Example$Win.Prob[1],
                      ifelse(simu_bets == 2,Kelly.Example$Win.Prob[2],
                         ifelse(simu_bets == 3,Kelly.Example$Win.Prob[3],
                                    ifelse(simu_bets == 4,Kelly.Example$Win.Prob[4],Kelly.Example$Win.Prob[5]))))

  #Generate a new matrix of random numbers and compare to the prob of winning: 1 means you won the bet, 0 means you lost
  simu_rnd <- matrix(runif(SimulationSteps*SimulationNumber,0,1),nrow=SimulationSteps,ncol=SimulationNumber)
  simu_results <- ifelse(simu_prob>=simu_rnd,1,0)

  #Generate a new matrix simu_results * simu_bets and create the wealth simulation over each timestep
  bet_combined <- simu_results * simu_bets
  bet_combined[bet_combined==0] <- -1
  multiplier <- 1 + simu_kellybets * bet_combined*KellyFraction
  Wealth_Matrix <- apply(rbind(InitialWealth, multiplier), 2, cumprod)
  row.names(Wealth_Matrix) <- NULL 

  #return the variation of wealth over each step for the defined number of simulations (Rows = Each Bet Decision Point / Column = Each simulation)
  return(Wealth_Matrix)
}

Step 3 : Run the Simulation and Attribute the Resulting Matrix to a Variable called kelly.sim with 700 steps and 1000 simulations and Fraction = 1

 kelly.sim <- kelly.simulation(InitialWealth=1000,SimulationNumber=1000,SimulationSteps=700,KellyFraction=1)

Step 4 : Check the results of the last row of the simulations (in the example row number 701)

max(kelly.sim[701,])
 [1] 47800703
mean(kelly.sim[701,])
 [1] 270680.9
min(kelly.sim[701,])
 [1] 3.377048

In your opinion, does this code replicate the simulation described in the paper?

by RiskTech at December 18, 2014 04:18 PM

/r/netsec

QuantOverflow

Ibrokers: reqMktData extremely slow when adding tickers

I am trying to snap prices in R for the latest price for a list of stocks (around 150). When I snap them for 2 stocks, it's almost instantaneous:

tickers <- c("YHOO","AAPL")
library("IBrokers")
t_start <- Sys.time()
tws <- twsConnect()
test <- reqMktData(tws, lapply(tickers, twsSTK), tickGenerics="", snapshot=T)
twsDisconnect(tws)
t_end <- Sys.time()
t_end - t_start

However, when I start adding more records into the tickers vector, it starts getting incredibly slow. For example:

tickers<-c("YHOO","AAPL","COP","PEP","XOM","ORCL","SPG","EQR","CVX","JPM","AFL","GIS","VZ","KMB","WFC","ROST","MMC")

This becomes excruciatingly slow.

I cannot figure out why it's so slow based on exchange, or size of company, as these are all large cap, highly liquid, blue chip stocks.

Is anyone familiar with why this is so slow?

Thank you very much.

by Trexion Kameha at December 18, 2014 04:07 PM

Fefe

Aha, now it makes sense! "USA wants to help Cubans ...

Aha, now it makes sense! "USA wants to help Cubans get online". At the moment, Cuba reaches the internet via a line to Venezuela. The NSA apparently suffers friction losses when tapping that line, which it would now like to get rid of. And without an embassy in Cuba, how are you supposed to get at intra-Cuban internet traffic? This cannot go on!

December 18, 2014 04:01 PM

New EU sanctions: cruise ships may no longer ...

New EU sanctions: cruise ships may no longer dock in Crimea. The other sanctions are rather laughable. Exports of goods in the areas of energy, oil and gas extraction, transport, and telecommunications are now also banned. Who is that supposed to hurt? Locust hedge funds?
In total, 132 Ukrainians and Russians have now been hit with entry and asset freezes over the Ukraine conflict, and the assets of 28 organizations have been frozen.
You can't eat as much as you would like to throw up.

December 18, 2014 04:01 PM

StackOverflow

Executing a function with a timeout

What would be an idiomatic way of executing a function within a time limit? Something like,

(with-timeout 5000
 (do-something))

Unless do-something returns within 5000 ms, throw an exception or return nil.

EDIT: before someone points it out, there is

clojure (with-timeout ... macro)

but with that the future keeps executing, which does not work in my case.

by Hamza Yerlikaya at December 18, 2014 03:57 PM

Java takes a long time before running anything if network connection times out

I wonder why Java needs to use the network to run any program. As my connection to the college wireless network really sucks, I get disconnected all the time, but without any notice from the access point. Thus, I cannot access the network anymore, but the connection is still seen as up by the operating system. (In my case, I'm using netctl on Arch Linux.)

The result is that Java will take 20 sec waiting for something before running any code from the main() method.

This problem doesn't appear if the connection is down (from the point of view of network utilities).

Do you know why Java does that, and how to prevent it?

Edit, as it doesn't seem clear:

How to reproduce the bug :

  1. Use a broken gateway, such as 240.0.0.1
  2. Write a Java program with an empty main() method.
  3. Compile (javac)
  4. Run the program

The last step takes 20+ seconds to complete.

What I'd like you to explain is not why I'm having network issues, but why Java is affected.

by tiktak at December 18, 2014 03:55 PM

QuantOverflow

What is delta neutral

Does a delta-neutral portfolio mean you add up the deltas of all positions and the sum should be zero? Is this true? Also, in an FX portfolio consisting of FX calls, puts, and forwards, if the forward delta is given for each, how do you make the portfolio delta neutral? Do you just add up the forward deltas and, depending on the sum, buy or sell a forward to bring the total delta to zero? Is this the right approach?

by user13524 at December 18, 2014 03:54 PM

/r/emacs

StackOverflow

How to process multi line input records in Spark

I have each record spread across multiple lines in the input file (a very huge file).

Ex:

Id:   2
ASIN: 0738700123
  title: Test tile for this product
  group: Book
  salesrank: 168501
  similar: 5  0738700811  1567184912  1567182813  0738700514  0738700915
  categories: 2
   |Books[283155]|Subjects[1000]|Religion & Spirituality[22]|Earth-Based Religions[12472]|Wicca[12484]
   |Books[283155]|Subjects[1000]|Religion & Spirituality[22]|Earth-Based Religions[12472]|Witchcraft[12486]
  reviews: total: 12  downloaded: 12  avg rating: 4.5
    2001-12-16  cutomer: A11NCO6YTE4BTJ  rating: 5  votes:   5  helpful:   4
    2002-1-7  cutomer:  A9CQ3PLRNIR83  rating: 4  votes:   5  helpful:   5

How do I identify and process each multi-line record in Spark?
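
One common approach (a sketch on my part, not from the post, assuming a SparkContext named sc) is to let Hadoop's TextInputFormat split the file on a custom record delimiter, under the assumption that every record starts with "Id:" at the beginning of a line:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

val conf = new Configuration(sc.hadoopConfiguration)
conf.set("textinputformat.record.delimiter", "\nId:") // assumes records begin with "Id:"

val records = sc
  .newAPIHadoopFile("records.txt", classOf[TextInputFormat],
    classOf[LongWritable], classOf[Text], conf)
  .map { case (_, text) => text.toString } // each element is one whole multi-line record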

by Vijay at December 18, 2014 03:50 PM

Using Scalaz Stream for parsing task (replacing Scalaz Iteratees)

Introduction

I use Scalaz 7's iteratees in a number of projects, primarily for processing large-ish files. I'd like to start switching to Scalaz streams, which are designed to replace the iteratee package (which frankly is missing a lot of pieces and is kind of a pain to use).

Streams are based on machines (another variation on the iteratee idea), which have also been implemented in Haskell. I've used the Haskell machines library a bit, but the relationship between machines and streams isn't completely obvious (to me, at least), and the documentation for the streams library is still a little sparse.

This question is about a simple parsing task that I'd like to see implemented using streams instead of iteratees. I'll answer the question myself if nobody else beats me to it, but I'm sure I'm not the only one who's making (or at least considering) this transition, and since I need to work through this exercise anyway, I figured I might as well do it in public.

Task

Supposed I've got a file containing sentences that have been tokenized and tagged with parts of speech:

no UH
, ,
it PRP
was VBD
n't RB
monday NNP
. .

the DT
equity NN
market NN
was VBD
illiquid JJ
. .

There's one token per line, words and parts of speech are separated by a single space, and blank lines represent sentence boundaries. I want to parse this file and return a list of sentences, which we might as well represent as lists of tuples of strings:

List((no,UH), (,,,), (it,PRP), (was,VBD), (n't,RB), (monday,NNP), (.,.))
List((the,DT), (equity,NN), (market,NN), (was,VBD), (illiquid,JJ), (.,.))

As usual, we want to fail gracefully if we hit invalid input or file reading exceptions, we don't want to have to worry about closing resources manually, etc.

An iteratee solution

First for some general file reading stuff (that really ought to be part of the iteratee package, which currently doesn't provide anything remotely this high-level):

import java.io.{ BufferedReader, File, FileReader }
import scalaz._, Scalaz._, effect.IO
import iteratee.{ Iteratee => I, _ }

type ErrorOr[A] = EitherT[IO, Throwable, A]

def tryIO[A, B](action: IO[B]) = I.iterateeT[A, ErrorOr, B](
  EitherT(action.catchLeft).map(I.sdone(_, I.emptyInput))
)

def enumBuffered(r: => BufferedReader) = new EnumeratorT[String, ErrorOr] {
  lazy val reader = r
  def apply[A] = (s: StepT[String, ErrorOr, A]) => s.mapCont(k =>
    tryIO(IO(Option(reader.readLine))).flatMap {
      case None       => s.pointI
      case Some(line) => k(I.elInput(line)) >>== apply[A]
    }
  )
}

def enumFile(f: File) = new EnumeratorT[String, ErrorOr] {
  def apply[A] = (s: StepT[String, ErrorOr, A]) => tryIO(
    IO(new BufferedReader(new FileReader(f)))
  ).flatMap(reader => I.iterateeT[String, ErrorOr, A](
    EitherT(
      enumBuffered(reader).apply(s).value.run.ensuring(IO(reader.close()))
    )
  ))
}

And then our sentence reader:

def sentence: IterateeT[String, ErrorOr, List[(String, String)]] = {
  import I._

  def loop(acc: List[(String, String)])(s: Input[String]):
    IterateeT[String, ErrorOr, List[(String, String)]] = s(
    el = _.trim.split(" ") match {
      case Array(form, pos) => cont(loop(acc :+ (form, pos)))
      case Array("")        => cont(done(acc, _))
      case pieces           =>
        val throwable: Throwable = new Exception(
          "Invalid line: %s!".format(pieces.mkString(" "))
        )

        val error: ErrorOr[List[(String, String)]] = EitherT.left(
          throwable.point[IO]
        )

        IterateeT.IterateeTMonadTrans[String].liftM(error)
    },
    empty = cont(loop(acc)),
    eof = done(acc, eofInput)
  )
  cont(loop(Nil))
}

And finally our parsing action:

val action =
  I.consume[List[(String, String)], ErrorOr, List] %=
  sentence.sequenceI &=
  enumFile(new File("example.txt"))

We can demonstrate that it works:

scala> action.run.run.unsafePerformIO().foreach(_.foreach(println))
List((no,UH), (,,,), (it,PRP), (was,VBD), (n't,RB), (monday,NNP), (.,.))
List((the,DT), (equity,NN), (market,NN), (was,VBD), (illiquid,JJ), (.,.))

And we're done.

What I want

More or less the same program implemented using Scalaz streams instead of iteratees.

by Travis Brown at December 18, 2014 03:48 PM

Logging and ignoring exception from Task in scalaz-streams

Let's take an example from some scalaz-stream docs, but with a theoretical twist.

import scalaz.stream._
import scalaz.concurrent.Task

val converter: Task[Unit] =
  io.linesR("testdata/fahrenheit.txt")
    .filter(s => !s.trim.isEmpty && !s.startsWith("//"))
    .map(line => fahrenheitToCelsius(line.toDouble).toString)
    .intersperse("\n")
    .pipe(text.utf8Encode)
    .to(io.fileChunkW("testdata/celsius.txt"))
    .run

// at the end of the universe...
val u: Unit = converter.run

In this case the file might very well contain some non-double string, and the fahrenheitToCelsius will throw some NumberFormatException. Let's say that in this case we want to maybe log this error and ignore it for further stream processing. What's the idiomatic way of doing it? I've seen some examples, but they usually collectFrom the stream.
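
One hedged sketch (not necessarily the idiomatic answer, and reusing the post's imports and fahrenheitToCelsius): wrap the parse in a Try, then flatMap failures into a log statement plus the empty process, so the rest of the stream keeps going:

import scala.util.{Failure, Success, Try}

val converter: Task[Unit] =
  io.linesR("testdata/fahrenheit.txt")
    .filter(s => !s.trim.isEmpty && !s.startsWith("//"))
    .map(line => Try(fahrenheitToCelsius(line.toDouble).toString))
    .flatMap {
      case Success(celsius) => Process.emit(celsius)
      case Failure(e) =>
        println(s"skipping bad line: $e") // stand-in for real logging
        Process.halt
    }
    .intersperse("\n")
    .pipe(text.utf8Encode)
    .to(io.fileChunkW("testdata/celsius.txt"))
    .run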

by kareblak at December 18, 2014 03:45 PM

How to distribute data to worker nodes

I have a general question regarding Apache Spark and how to distribute data from the driver to the executors. I load a file into a collection with 'scala.io.Source', then parallelize the collection with 'SparkContext.parallelize'. Here begins the issue: when I don't specify the number of partitions, the number of workers is used as the partitions value, the task is sent to the nodes, and I get a warning that the recommended task size is 100kB while my task size is e.g. 15MB (60MB file / 4 nodes). The computation then ends with an 'OutOfMemory' exception on the nodes. When I parallelize to more partitions (e.g. 600 partitions, to get about 100kB per task), the computations are performed successfully on the workers, but the 'OutOfMemory' exception is raised in the driver after some time. In this case, I can open the Spark UI and observe how the memory of the driver is slowly consumed during the computation. It looks like the driver holds everything in memory and doesn't store the intermediate results on disk.

My questions are:

  • Into how many partitions should I divide the RDD?
  • How do I distribute the data 'the right way'?
  • How do I prevent the memory exceptions?
  • Is there a way to tell the driver/worker to swap? Is it a configuration option, or does it have to be done 'manually' in program code?
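
On the distribution point, one hedged sketch (assuming a SparkContext named sc): reading the file with sc.textFile lets each executor read its own partitions directly, instead of loading everything in the driver and shipping it out with parallelize:

// instead of: sc.parallelize(scala.io.Source.fromFile("data.txt").getLines().toList)
val rdd = sc.textFile("data.txt", minPartitions = 64) // partition count is a tunable guess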

Thanks

by René Kolařík at December 18, 2014 03:36 PM

Lobsters

TheoryOverflow

Research data organization

This is a question in the spirit of this one, where I answered that it is important to keep track of what you have done, why you have done it, and what is not working. I personally use notebooks for that purpose, but this has several drawbacks: first, I need a lot of storage space; second, when I travel I cannot access my data; and finally, it is not collaborative. It has however a strong plus: the notebook can be used as the equivalent of a laboratory notebook (you just have to find somebody to sign each page...).

So, I am interested in knowing how other researchers proceed in this matter. For example, is there any specific software that solves all the issues I mentioned?

by Sylvain Peyronnet at December 18, 2014 03:28 PM

StackOverflow

What should I import for Scalaz' traverse functionalities

In every example I read about Scalaz's traverse features, the following imports were done:

import scalaz._
import Scalaz._

It seems I can't use traverseU until I import Scalaz._.

How does the Scalaz object inject traverseU into my collections? I'm completely lost in the reference docs.

What should I import if I just want the traverse and traverseU methods?
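
A sketch of the narrower Scalaz 7 imports that usually suffice (worth double-checking against your Scalaz version):

import scalaz.syntax.traverse._ // enables the .traverse / .traverseU syntax
import scalaz.std.list._        // Traverse instance for List
import scalaz.std.option._      // Applicative instance for Option (a common effect)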

by Antoine at December 18, 2014 03:19 PM

Fefe

Jahrelang galt in der Softwareentwicklung: Wenn der ...

Jahrelang galt in der Softwareentwicklung: Wenn der Updater funktioniert, shippen wir! Egal ob vom "Produkt" nur der Splash-Screen fehlerfrei tut.

Jetzt stellt sich raus: Nicht mal dass der Updater funktioniert ist sicher. Ubisoft hat es geschafft, sogar noch den Updater massiv zu verkacken. Daher müssen sich die Leute jetzt 40 GB für ein "Update" runterladen. Offensichtlich ist das eher ein Neudownload der neuen Version.

December 18, 2014 03:01 PM

CompsciOverflow

Graphical Tools for Drawing Automata

I am beginning a journey into the theory of computation, and I am wondering if there are any free tools available (preferably online ones) that are designed specifically for drawing theoretical structures such as the various types of automata.

by LukeP at December 18, 2014 02:57 PM

/r/netsec

StackOverflow

java.lang.IllegalArgumentException: Unable to resolve classname: FileReader

We are trying to write some Clojure code and we successfully compiled it a couple of minutes ago, but now we get this random exception.

CompilerException java.lang.IllegalArgumentException: Unable to resolve classname: FileReader, compiling:(myproject\core.clj:24:17) 

Here is our code:

(ns myproject.core)

(defmacro safe ([bindings & code] form)
  (if (list? bindings)
    `(try
       ~bindings
       (catch Throwable except# except#))
    (if (= (count bindings) 0)
      `(try ~code
            (catch Throwable except# except#))
      `(let ~(subvec bindings 0 2)
         (try
           (safe ~(subvec bindings 2) ~@code)
           (catch Throwable except# except#)
           (finally
             (. ~(bindings 0) close))))))) ;;safe


(def div(safe (/ 12 2)))
(def v (safe [s (FileReader. (java.io.File. "M:/test.txt"))] (. s read)))
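;; note: the "Unable to resolve classname" error points at FileReader above;
;; it is neither imported in the ns form nor fully qualified, so
;; java.io.FileReader. (like java.io.File. already is) would resolve it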

by gjojo at December 18, 2014 02:47 PM

Lobsters

/r/osdev

Alternate High Performance Mobile OS

Hello. I was interested in finding other operating systems that work on mobile devices (phone/tablet). My main reason is to find a system without all the fancy colours and unnecessary animations, so to speak a barebones "spartan" system that focuses mainly on functionality and performance and leaves fancy designs completely out of the equation. An extensive Google search yielded little information on any such systems, and so I wanted to ask.

submitted by H_Plus_Sejuani

December 18, 2014 02:41 PM

Lobsters

StackOverflow

Best practices for sharing ansible playbooks in a private team? [on hold]

I'm investigating migrating our current chef based configuration management to ansible.

We build a lot of rails apps that have similar dependencies (ruby, unicorn, nginx, monit, MySQL etc).

So at the moment we're using librarian-chef and private github repos to share our common recipes between the different projects.

I'm new to ansible, the closest thing I've found is ansible galaxy but that seems to be an "out in the open" style thing.

What's the common / best practice for this in a private environment?

Git submodules is the only answer that comes to mind, but it'd be nice to have something a little more automatic like we're used to in the chef world.

by Daniel Upton at December 18, 2014 02:23 PM

Converting Pk[Long] to Option[Long] in a Form

I have been having trouble understanding what the issue is here since the Scala Anorm Pk became deprecated.

I switched my model to the following:

case class Item(id: Option[Long] = NotAssigned,
            title: String,
            descr: String,
            created: Option[Date],
            private val imgs: List[Img],
            private val tags: List[Tag]) 

From id: Pk[Long]

I changed my form to:

val itemForm = Form(
    mapping(
      "id" -> ignored(23L),
      "title" -> nonEmptyText,
      "descr" -> nonEmptyText,
      "created" -> optional(ignored(new Date)),
      "imgs" -> Forms.list(itemImgs),
      "tags" -> Forms.list(itemTags)
    )(Item.apply)(Item.unapply)
)

From "id" -> ignored(NotAssigned:Pk[Long])

But, I get this error.

type mismatch; found : (Option[Long], String, String, scala.math.BigDecimal, Option[java.util.Date], List[models.Img], List[models.Tag]) => models.Item required: (Long, String, String, Option[java.util.Date], List[models.Img], List[models.Tag]) => ? )(Item.apply)(Item.unapply)

Why is an Option[Long] not required on the Item model?

I don't know what 23L is, but that's what was in the Play Documentation. The value of id in the database is coming from a sequence.

If I change it to:

"id" -> ignored(NotAssigned:Option[Long]),

Which makes the most sense to me... I get this error:

type mismatch; found : anorm.NotAssigned.type required: Option[Long] "id" -> ignored(NotAssigned:Option[Long]),

Which makes less sense than before.
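
For what it's worth, a hedged guess at the fix: NotAssigned is a value of anorm's old Pk type, not an Option, so the Option-based model wants None as its default, and the mapping wants an actual Option[Long]:

// model default:  id: Option[Long] = None
// form mapping:
"id" -> ignored(Option.empty[Long]),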

by bad at scala at December 18, 2014 02:13 PM

CompsciOverflow

Using A* search with different heuristic values

I am currently trying to use A* to create cyclical routes (for plotting driving routes of set distances). I want to find a driving route from my start location that is as close to my specified length as possible. If it is bigger or smaller that is fine; just as close as possible.

I have therefore adapted the heuristic function to:

|target distance − distance travelled so far| + distance left to travel.

For example, I want to travel 20 miles, so my target distance is 20. At some point in the route I may have travelled 18, with 2 miles left back to the start, so my h would be |20 − 18| + 2 = 4. I always pick the smallest h value, the idea being that a route of 20 miles will be created that finishes where it starts.

I had 2 immediate issues with this.

  1. The Start node can't be added to the closed list immediately or it will never create a cyclical route - I believe I have solved this one.
  2. If the search goes into a cul-de-sac it gets stuck and the openlist becomes empty and the search terminates.

The second problem I cannot think of how to solve. It occurs because the successor node that would get the search out of the cul-de-sac is already in the closed list.

Can anyone help me come up with a general solution to my problem? Furthermore, do you foresee any other issues with my implementation?

I would really appreciate any help you can give.

by Programatt at December 18, 2014 02:03 PM

Fefe

Oooooh yes. For months I've been hoping that Edathy ...

Oooooh yes. For months I've been hoping that Edathy would take all his party comrades down into the pit with him. Today was the long-awaited Edathy hearing, and it looks as if he has left scorched earth behind. He showed up with a sworn affidavit and with transcripts of his party colleagues' text messages. And apparently he had told none of them beforehand; at any rate, they first suspended the session to review the material. Mwahahaha

His accusation: Methamphetamine-Hartmann was not only his source, but had also promised him to intervene, that is, to prevent the matter from becoming a problem. Which he then apparently did not do. That would explain why Edathy now feels the need to throw him to the wolves.

December 18, 2014 02:01 PM

Halfbakery

/r/netsec

StackOverflow

Retrieve play.api.mvc.Request in a Java controller

I'm currently working on a RESTful API using the play! framework 2.x for an academic project.

I tried to use the Apache Oltu library, but as it makes intensive use of HttpServletRequest/Response I wasn't able to. Then I found a Play Request wrapper for HttpServletRequest, but it was for Play 1.x. Because I have no knowledge of servlets I wasn't able to write a wrapper myself, so I searched the web for something else.

I'm trying to use the oauth2play2scala library (which is a port of Oltu for play 2.x) to implement an OAuth provider, but I'm facing the problem that the library was written for the Scala API of play while I'm exclusively using Java.

As you can see in the example code from the oauth2play2scala repository, I need to pass the play.api.mvc.Request instance to the OAuthAuthzRequest constructor. All the classes in the play.api package are used from Scala, while the classes outside this package are usable from Java. In order to construct an OAuthAuthzRequest, I need:

  • to retrieve the play.api.mvc.Request instance (from Scala to Java) OR
  • to find a wrapper in order to use a play.mvc.Request (Java) as a play.api.mvc.Request (Scala)
  • another alternative that I didn't think about :D

Thanks in advance

by Aureo at December 18, 2014 01:45 PM

PlayFramework: How to merge a sequence of JsValue instances to a single JSON document

Given the following sequence of JsValue instances:

[
    { "name":"j3d" },
    { "location":"Germany" }
]

How do I merge them to a single JSON document like this?

{
    "name":"j3d",
    "location":"Germany"
}

Here below is my Scala code:

import play.api.libs.json._

val values = Seq(Json.obj("name" -> "j3d"), Json.obj("location" -> "Germany"))

how do I merge all the JSON objects in values?
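
One sketch (assuming the elements are JsObject, which Json.obj produces): fold the sequence together with JsObject's ++, which merges the fields:

val merged = values.foldLeft(Json.obj())(_ ++ _)
// Json.stringify(merged) == """{"name":"j3d","location":"Germany"}"""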

by j3d at December 18, 2014 01:43 PM

/r/compsci

StackOverflow

Clojure (or any functional language): is there a functional way of building flat lists by a recursive function?

I've got a recursive function building a list:

(defn- traverse-dir
  "Traverses the (source) directory, preorder"
  [src-dir dst-root dst-step ffc!]
  (let [{:keys [options]} *parsed-args*
        uname (:unified-name options)
        [dirs files] (list-dir-groomed (fs/list-dir src-dir))

... recursive call of traverse-dir is the last expression of dir-handler

  (doall (concat (map-indexed (dir-handler) dirs) (map-indexed (file-handler) files))))) ;; traverse-dir

The list, built by traverse-dir, is recursive, while I want a flat one:

flat-list (->> (flatten recursive-list) (partition 2) (map vec))

Is there a way of building the flat list in the first place? Short of using mutable lists, that is.

by Alexey Orlov at December 18, 2014 01:20 PM

Interceptors/Filters in Spray

I'm busy migrating one of our Spring/Groovy applications to Spray/Scala. I'm fairly new to Spray, so forgive me if this is a beginner question.

The objective is to emulate our logging interceptor, which logs various data for every request/response. There is quite a lot of logic in this part of the code, so it's not a simple log line. Also, I'd like to wrap ALL incoming requests with this logic.

Existing Groovy/Spring Interceptor:

boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
  //do some logging logic
}


void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
  //do some more logging logic
}

My Scala actor looks like this

class BootServiceActor extends Actor with ViewingController with LazyLogging with ViewingWire {

def actorRefFactory = context

implicit val ctx = context.dispatcher

def receive = runRoute(route)(exceptionHandler, RejectionHandler.Default, context,
  RoutingSettings.default, LoggingContext.fromActorRefFactory)
}
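
One hedged direction (a sketch assuming spray-routing 1.x's basic directives mapRequest and mapHttpResponse are in scope; logIncoming and logOutgoing are hypothetical stand-ins for the existing logging logic): build a Directive0 and wrap the whole route in it:

val logAround: Directive0 =
  mapRequest { req => logIncoming(req); req } &
  mapHttpResponse { resp => logOutgoing(resp); resp }

def receive = runRoute(logAround(route))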

by Bruce Lowe at December 18, 2014 01:19 PM

Planet Emacsen

sachachua: Emacs Hangout #3: Emacs can read your mind

We’ve been organizing these Emacs Hangouts as an informal way for folks to get together and swap tips/notes/questions. You can find the previous Hangouts at http://sachachua.com/blog/tag/emacs-hangout/ . In this hangout, we shared tips on Emacs configuration, literate programming, remote access, autocompletion, and web development. And then Jonathan Arkell blew our minds with, well, his mind, demonstrating how he got Mindwave working with Emacs and Org Mode. The next one is scheduled for Jan 9, 2015 (Friday) at 7 PM Toronto time (12 AM GMT) – https://plus.google.com/events/cv3ub5ue6k3fluku7e2rfac161o . Want a different time? Feel free to set up an Emacs Hangout, or contact me (sacha@sachachua.com) and we’ll coordinate something.

Approx. time Topic
0:08 describe-variable
0:12 cycle-spacing
0:14 quelpa, better-defaults
0:18 https://github.com/KMahoney/kpm-list
0:19 org-babel
0:24 noweb
0:27 Beamer, org-present
0:30 Emacsclient
0:32 TRAMP, vagrant, X11 forwarding, git
0:40 Evangelism, Emacs defensiveness
0:42 Code organization
0:47 Cask, Quelpa, el-get
0:54 paradox for listing packages
0:58 Helm, helm-git
1:02 Projectile
1:03 More helm, autocomplete
1:06 Autocomplete and company
1:16 Writing packages, flycheck
1:18 Moving to git, working on Emacs
1:22 Gnus, mu4e, notmuch
1:27 Eww, web browsing
1:28 Web dev tools: skewer-mode, slime, swank-js, web-mode
1:32 o-blog static site generator
1:38 orgaggregate
1:41 EEG data. Emacs can read your mind!

Chat, links:

me 8:07 PM Thanks!
Zachary Kanfer 8:10 PM A description of Emacs’s “describe variable” is here: Examining.html#Examining
JJ Asghar 8:11 PM zachary: thanks! wait wait wait, org-bable can take over your .emacs.d/*.el files?
me 8:18 PM JJ: Yeah, totally! It’s so useful.
JJ Asghar 8:19 PM i need to dig into that
Jacob MacDonald 8:19 PM https://github.com/KMahoney/kpm-list
jay abber 8:23 PM Org mode has functionality for LaTeX/TeX it appears Am I wrong, any ppl here using Emacs for ReST or LaTeX??
jay abber 8:27 PM it is
Jacob MacDonald 8:27 PM I used the PDF export in Org for notes in a math class, since it exports LaTeX nicely.
jay abber 8:27 PM https://www.cs.tufts.edu/~nr/noweb/
me 8:27 PM I’ve been using Org to export ta LaTeX for Beamer output
jay abber 8:27 PM np
jay abber 8:28 PM yeaaah up yup
Jonathan Arkell 8:29 PM Time for a restart.
jay abber 8:29 PM I think it would nice to know who uses emacs mainly graphically or in a terminal?? me = lurker sorry
jay abber 8:30 PM im trying to use it more in a terminal but always go graphic
Jacob MacDonald 8:31 PM emacs –daemon; emacsclient -c
jay abber 8:31 PM yosmiate yosmite me homebrew
jay abber 8:31 PM 24.4
jay abber 8:32 PM I like that
Christopher Done 8:32 PM audio sounds very trippy
jay abber 8:32 PM w/daft punk poster rockiin
Jonathan Arkell 8:33 PM heh! It’s signed too.
JJ Asghar 8:34 PM Sorry guys I have to go! Thanks so much for this!
me 8:34 PM See you!
jay abber 8:34 PM peace or vnc but thats alot of overhead make sure you lock down you sshdconfig files with sane sec practice Emacs over TMUX?????
Christopher Done 8:38 PM https://github.com/chrisdone/chrisdone-emacs/blob/master/packages/resmacro/resmacro.el unrelated, thought i’d share that =p
me 8:38 PM jay: Good suggestions. Want to speak up?
jay abber 8:38 PM lm lurking tonight
Jacob MacDonald 8:38 PM That audio .
jay abber 8:38 PM next one I promise His voice is awesome
Jacob MacDonald 8:40 PM http://www.emacswiki.org/Rudel
jay abber 8:40 PM Well for me sometimes I hate to confess but I just type vi/vim Noone I know uses any type of editor except word hahahaha
Jonathan Arkell 8:40 PM K, i am going to try audio again. Hopefully it will help Was that better?
jay abber 8:41 PM Can emacs do stuff like mpsyt or youtubedl somehow? yes!!!!
Jacob MacDonald 8:42 PM elisp interface to a shell script should work at a bare minimum.
Jacob MacDonald 8:42 PM I mean, there’s a web browser/mail reader/IRC client built in already…
me 8:42 PM I play MP3s in Emacs using emms and mplayer
jay abber 8:43 PM you know what
Jacob MacDonald 8:43 PM There was a Spotify plugin using dbus a while back, I believe.
jay abber 8:43 PM I think mysyt will be fine
Christopher Done 8:43 PM i was thinking of writing an emacs client to gmail via gmail’s API…
jay abber 8:43 PM its is a just a python script and mpv very suave and minimalist both python
Christopher Done 8:45 PM i stick all my own packages and ones i’m using in my repo https://github.com/chrisdone/chrisdone-emacs/tree/master/packages as submodules
me 8:45 PM Christopher: Gmail client might be nice. I use IMAP occasionally, but I miss the priority inbox.
Christopher Done 8:46 PM yeah. i used offlineimap for a while with notmuch.el, that was pretty good. but i’m tempted by the idea of a “light-weight” approach replacing the browser with emacs, requesting emails/search on demand. might be nice their API looked super trivial to work with
Jonathan Arkell 8:48 PM Sorry Yea Is qwelpa (sp?) native emacs? (elisp) Stupid mic. works great for music.
Jacob MacDonald 8:50 PM lol
Jonathan Arkell 8:50 PM I do all my configuration and packages in Org mode
Christopher Done 8:50 PM i just use git for everything =p
me 8:51 PM Jonathan: Oh, maybe you’re doing some kind of audio processing that removes noise or other odd things? &lt;/wild guess&gt;
Jonathan Arkell 8:51 PM Ironically not. I am Launching my DAW now to try and sort it ot.. heh err out … not ot…
jay abber 8:53 PM M=x list-packages now installing org-mode
me 8:53 PM Jay: If you’re installing Org from package, be sure to do it in an Emacs that has not loaded an Org file. because Org 8 (package) and Org 7 (which is built into Emacs) have incompatibilities
jay abber 8:55 PM hmm i installed 24.4 via homebrew
Jonathan Arkell 8:58 PM Okay, I am switching to the built in mic, so hopefully it works. Let me know…
Zachary Kanfer 9:11 PM http://emacsnyc.org/videos.html#2014-05
me 9:12 PM https://github.com/aki2o/emacs-plsense ?
jay abber 9:15 PM Im trying to become a ninja using the shell from w/in Emacs but sometimes I have issues with my ENV and PATH
Jonathan Arkell 9:15 PM OS?
Jacob MacDonald 9:15 PM It’s a thing.
Zachary Kanfer 9:15 PM http://emacs.stackexchange.com/
jay abber 9:15 PM Yosmite like pyenv or rubyenv in HomeBrew yes yes
Jacob MacDonald 9:16 PM Depends on if you use emacs like from brew or Emacs.app.
jay abber 9:16 PM I got cha will find it
Jonathan Hill 9:17 PM great package for handling env variables in and so forth in OSX: exec-path-from-shell
jay abber 9:18 PM jonathan: thanks man
Jonathan Hill 9:18 PM just after (package-initialize), do (exec-path-from-initialize) oops (exec-path-from-shell-initialize)
jay abber 9:18 PM jh: ok
Jonathan Arkell 9:19 PM (setenv “PATH” (concat (getenv “HOME”) “/bin:” “/usr/local/bin:” (getenv “PATH”))) That’s waht i do… (add-to-list ‘exec-path “/usr/local/bin”)
me 9:21 PM http://lars.ingebrigtsen.no/2014/11/13/welcome-new-emacs-developers/
Jonathan Arkell 9:22 PM ERMERGERD +1 +1
Jacob MacDonald 9:32 PM Link please?
Bob Erb 9:33 PM What’s it called?
me 9:33 PM https://github.com/renard/o-blog ?
Jacob MacDonald 9:33 PM Thanks.
me 9:34 PM http://renard.github.io/o-blog/ – docs
jay abber 9:34 PM hey
jay abber 9:34 PM sorry I got side tracks I blog in my posts in REST for pelican static blog generator
jay abber 9:35 PM omg
me 9:35 PM Pretty!
jay abber 9:35 PM elisp for static blog oh know
John Wiegley 9:36 PM Hello
jay abber 9:36 PM https://github.com/renard/o-blog you should never shown that to me
Jacob MacDonald 9:37 PM John, somehow I think I’ve seen you before…
me 9:40 PM https://github.com/tbanel/orgaggregate
Jonathan Arkell 9:44 PM https://github.com/jonnay/mindwave-emacs
jay abber 9:44 PM Hey I have to go now
John Wiegley 9:44 PM Bye Jay
me 9:44 PM See you! Thanks for joining!
jay abber 9:44 PM This was awesome I will be on the next one I have to study precalc
me 9:44 PM Yay!
jay abber 9:45 PM take care
me 9:49 PM Oooh… I wonder how to make coloured graphs like that too. Neat! I should practise using overlays…
Jonathan Arkell 9:53 PM https://github.com/jonnay/mindwave-emacs Here is the Display Code: https://github.com/jonnay/mindwave-emacs/blob/master/mindwave-display.org So wait… C-u C-u C-p takes you… uup?
me 9:59 PM Hah! UUP! Brilliant!
Bob Erb 10:01 PM You’re a treasure, Sacha!

The post Emacs Hangout #3: Emacs can read your mind appeared first on sacha chua :: living an awesome life.

by Sacha Chua at December 18, 2014 01:00 PM

CompsciOverflow

Is there a name for the 'scope tree' organization?

I could describe jQuery as a library that allows you to easily select elements on, and traverse, the DOM; the DOM being the name of the tree or organizational structure of the HTML.

When you describe how a JS (or any other language) scope lookup bubbles up from the local scope to its parent scope, then to its parent scope, ...., until there are no more parent scopes and you have reached the global object scope, what would you call this scope organization?

by chris Frisina at December 18, 2014 12:54 PM

Planet Theory

The NIPS Experiment

The NIPS (machine learning) conference ran an interesting experiment this year. They had two separate and disjoint program committees with the submissions split between them. 10% (166) of the submissions were given to both committees. If either committee accepted one of those papers it was accepted to NIPS.

According to an analysis by Eric Price, of those 166, about 16 (about 10%) were accepted by both committees, 43 (26%) by exactly one of the committees and 107 (64%) rejected by both committees. Price notes that of the accepted papers, over half (57%) of them would not have been accepted with a different PC. On the flip side 83% of the rejected papers would still be rejected. More details of the experiment here.

No one who has ever served on a program committee should be surprised by these results. Nor is there anything really wrong or bad going on here. A PC will almost always accept the great papers and almost always reject the mediocre ones, but the middle ground are at a similar quality level and personal tastes come into play. There is no objective perfect ordering of the papers and that's why we task a program committee to make those tough choices. The only completely fair committees would either accept all the papers or reject all the papers.

These results can lead to a false sense of self worth. If your paper is accepted you might think you had a great submission, more likely you had a good submission and got lucky. If your paper was rejected, you might think you had a good submission and was unlucky, more likely you had a mediocre paper that would never get in.

In the few days since NIPS announced these results, I've already seen people try to use them not only to trash program committees but for many other subjective decision making. In the end we have to make choices on who to hire, who to promote and who to give grants. We need to make subjective decisions and those done by our peers aren't always consistent but they work much better than the alternatives. Even the machine learning conference doesn't use machine learning to choose which papers to accept.

by Lance Fortnow (noreply@blogger.com) at December 18, 2014 12:44 PM

TheoryOverflow

Alternating tree automata for arbitrary arity tree

Can alternating tree automata be used to recognize a set (language) of trees of arbitrary arity?

More specifically, as an example: let $\Sigma = \{a,b,c\}$ be the labels for tree nodes. Trees from $T$ have the following form: each node has exactly one of the following sets of children: $\{a,b\}$, $\{b\}$, $\{c\}$.

I.e., the tree will be of a form something like this ($e$ in the picture is just the root of the tree; the indexes on the labels are artifacts of the drawing tool and are not related to the question):

[Picture of an example tree omitted.]

by Andremoniy at December 18, 2014 12:15 PM

StackOverflow

Simplifying Scala expression calculating ratio

I'm trying to calculate the aspect ratio of a java.awt.Rectangle in Scala. I'm going for the "ratio of longer side to shorter side" definition of aspect ratio, not the "width to height" type of aspect ratio.

The following code works, but is there any way to avoid the temporary variable and turn it into a one-liner?

    val sizes = Seq(rect.getWidth, rect.getHeight)
    val aspectRatio = sizes.max / sizes.min
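
One possible one-liner (my sketch, using the max/min operators that Scala's RichDouble provides):

    val aspectRatio = (rect.getWidth max rect.getHeight) / (rect.getWidth min rect.getHeight)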

by My other car is a cadr at December 18, 2014 12:09 PM

How to format numbers using space as separator in Clojure?

For example, I can format using a comma as the separator:

(format "%,d"(BigInteger. "fffff" 16))
;=> 1,048,575

Is it possible to use space instead:

1 048 575 ?

by Nick at December 18, 2014 12:08 PM

replace break line with another equivalent line in C#

I need to replace the break; line with an equivalent line without using goto. What are the possible solutions?

static void Main(string[] args)
{
    int Max = 100;

    for (int i = 0; i < Max; i++)
    {
        Console.WriteLine(i);

        if (i == 50)
            break;
    }
    Console.ReadLine();
}

by Ahmed.Marzouk at December 18, 2014 12:07 PM

Fred Wilson

The Interview Mess

So Sony has decided to pull the plug on The Interview after the major theater chains decided against showing the film.

This is a fascinating story on so many levels. It is not clear  to me who was behind the hacking attack on Sony, but there are some obvious candidates. We are witnessing cyber warfare in real time. And there are real costs involved. Who knows how much Sony has lost or will lose as a result of the hacking incident and all the repercussions. But we do know that The Interview cost $42mm to make and there were “tens of millions” of marketing and distribution costs already spent as well. All of that comes from the article I linked to at the start of this post.

How will this impact the entertainment business going forward? Will they now harden all of their systems? Yes. Will the cybersecurity industry get a boost from this incident? Yes. Will it change how they think about making films and other entertainment? I would have to imagine the answer to that question is yes.

And what of the film itself? Should we allow censorship of this form to exist in our society? Should the film get released in some form?

I think the Internet, which was the source of so much harm to Sony, should also provide the answer to what happens to this film. If I were Sony, I would put the film out on BitTorrent, and any other Internet services that want it. Give it to Netflix if they want it. Give it to iTunes if they want it. Give it to HBO if they want it. Give it to Showtime if they want it. Essentially give the film to the world and let the world, via the Internet, decide what they want to do with it.

Of course this is about money to Sony. $42mm is a lot of money to write off. And it is a lot more than that given all the extra costs. But keeping the film locked away in a vault is also a cost. Both to Sony and to society. It says that the attack worked. I think the best thing Sony can do at this point is give the world the film and let us all decide what we think about it. We should not let cyberterrorist censorship have its way.

by Fred Wilson at December 18, 2014 12:06 PM

StackOverflow

Gatling validate scenarii

Before sending my scenarii to our Jenkins/Gatling instance, I would like to validate them on the laptop where I write them. At the moment I run Gatling locally and stop it as soon as it starts, but this is not optimal, and I am looking for a way to check the scenarii with scalac, since Gatling does not have a check option. But when running scalac on a scenario I always get

Homepage.scala:1: error: object duration is not a member of package concurrent import scala.concurrent.duration._

How do you validate your scenarii? Can someone help me use scalac?

Thanks !

by Rodolphe at December 18, 2014 11:56 AM

QuantOverflow

Does the correlation of matrices have explanatory power when building a pattern recognition model?

I'm using 8 different variables (with daily observations) with the purpose of comparing different months across the historical data. For that purpose I calculate the correlation between each month and the historical months in the data, and then compute the Euclidean distance in order to find the closest month.

Does it make sense? Is there any literature regarding such experiments?

by goncalogc at December 18, 2014 11:53 AM

CompsciOverflow

How to choose normalizing factor in regret matching algorithm

How do I choose the normalizing factor for the max regret calculated in the regret matching algorithm? Generally the normalizing factor is the time 'T', but in my case the utility function values vary from 0 to 5000. So how do I choose the appropriate dividing factor for calculating probabilities for the strategies? (The required equations for regret matching are here.)

by BaluRaman at December 18, 2014 11:51 AM

StackOverflow

Routee's messages was not delivered to the Sender

I'm working on a project which runs as a cluster and was implemented using the Akka Scala API. My application has mainly 4 actors: Client, Dispatcher, WorkDistributor, and Worker.

My application.conf as follows,

cluster {
  seed-nodes = [
    "akka.tcp://application@<IP>:2553"
  ]
  roles = ["frontend", "dispatcher", "backend", "frontRouter", "dispRouter", "backRouter"]
auto-down = on
}

actor.deployment {

  lifecycle = on

  /client/router {
    router = round-robin-group
    nr-of-instances = 1
    routees.paths = [
      "/user/workDispatcher/router"
    ]
    cluster {
      allow-local-routees = on
      enabled = on
      use-role = "frontRouter"
    }
  }

  /workDispatcher/router {
    router = round-robin-group
    nr-of-instances = 1
    routees.paths = [
      "/user/workDistributor/router"
    ]
    cluster {
      allow-local-routees = on
      enabled = on
      use-role = "dispRouter"
    }
  }

  /workDistributor/router {
    router = round-robin-pool
    nr-of-instances = 1
    cluster {
      enabled = on
      allow-local-routees = on
      use-role = "backRouter"
    }
  }

}

Workers receive work from Clients through the routers successfully, but the issue is that messages sent by Workers after completing a job are not delivered to the Client. My code for sending the message to the Client is "sender() ! JobComplete". This message is never delivered to the Client.

Additionally, Dispatcher and WorkDistributor are working as intermediate actors.
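(Not part of the original question: one common cause of this symptom is that an intermediate actor passes the work along with ! (tell), which replaces the original sender, so the worker's sender() ends up pointing at the router or dispatcher instead of the Client. A hedged sketch of the difference:)

// a minimal sketch: `!` makes the current actor the sender; `forward`
// preserves the original sender, so the worker's sender() still points
// at the Client even after two intermediate hops
class Dispatcher(next: akka.actor.ActorRef) extends akka.actor.Actor {
  def receive = {
    case work => next forward work // not: next ! work
  }
}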

I would appreciate any ideas on how I can resolve this issue.

Thanks and Regards

by Dhanushka Gayashan at December 18, 2014 11:25 AM

QuantOverflow

Which risk free rate is assumed by market when pricing american options?

I've just started with finance, so maybe my question is dumb or answered elsewhere. Please guide me to relevant materials.

According to put-call parity, more time to expiration means a larger difference between put and call prices: $\text{Call} - \text{Put} = \text{Spot} - \text{Strike} \cdot e^{-rT}$. My understanding is that this is to avoid arbitrage between stock plus put vs. call plus deposit; the arbitrage is avoided by embedding the deposit returns into the call price.

Now looking at real prices I do not see large difference between Put and Call options prices even for options which have about a year till expiration which suggest near zero risk-free rate. For example, today data from google:

Stock | Expiration   | Spot   | Strike | Put Bid | Call Bid |
AAPL  | Jan 15, 2016 | 109.41 | 110    | 14.95   | 13.40    |
SBUX  | Jan 15, 2016 | 80.43  | 82.50  | 9.20    | 6.55     |

I calculate the risk-free rate, assuming $T \approx 1$, as $r = -\ln((\text{Put} + \text{Spot} - \text{Call})/\text{Strike})$.
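A quick sanity check of that arithmetic (not in the original post), plugging the two table rows into the formula with $T = 1$:

// implied risk-free rate from put-call parity: r = -ln((P + S - C) / K)
def impliedRate(spot: Double, strike: Double, put: Double, call: Double): Double =
  -math.log((put + spot - call) / strike)

println(impliedRate(109.41, 110.0, 14.95, 13.40)) // AAPL: about -0.0087
println(impliedRate(80.43, 82.50, 9.20, 6.55))    // SBUX: about -0.0070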

In both cases (AAPL, SBUX) the risk-free rate is slightly less than 0. Looking at this, two questions arise:

  1. Are my calculations correct?
  2. If the market assumes a zero risk-free rate, does this mean calls are underpriced? One can still earn the risk-free rate by investing in bonds or a savings account. In this case call plus deposit will earn more than stock plus put, since the call price does not have the risk-free rate embedded in it.

by averbin at December 18, 2014 11:23 AM

/r/emacs

Workflows for scripting and manipulation on text files

Yesterday I needed to comment out specific lines in some big text files. So I identified those lines with M-x occur. This gave me a list of the lines, with the line numbers where I need to make changes, e.g.:

  • 1: some text
  • 3: some text
  • 34: some text
  • ...
  • n: some text

Then I copy-pasted this output to a new buffer and made two macros to cut out the line numbers so I had just the numbers. This needed two steps:

  1. <f3> jump to the first occurrence of ":", kill the rest of the line, move one char back, move down one line <f4> -> apply-to-end-of-buffer -> gave me a list of line numbers, one per line; move to the beginning of the file.
  2. <f3> jump to the end of the line, delete the rest of the line (which moves the next line up), move down one line <f4> -> apply-to-end-of-buffer -> gives me the list of line numbers.

Last step: write a small script that applies the change to each line number I have from my original text buffer:

(defvar line-numbers '(all the line numbers from the two macros))

(let (val)
  (switch-to-buffer "myfile.txt")
  (dolist (elt line-numbers val)
    (goto-line elt)
    (insert "#")))

I know: I don't really need val.

What other ways of doing jobs like this are you guys using? I know I could have used sed or something, but I was in an emacs / lisp mood that day... In particular, what kind of tricks would you apply to cut out line numbers and so on? Also, could I have just applied Emacs Lisp code to an opened and focused text buffer without explicitly switching to it?

Thanks for reading!

submitted by paines
[link] [10 comments]

December 18, 2014 10:38 AM

StackOverflow

how to display highlighted search result in a play html template using ElasticSearch

I am using Elasticsearch with the Play framework. I am searching a document and displaying the matched results in a Play template. Now I want to use Elasticsearch's highlighting feature and display the results with the highlighted word that matched the document. I am able to get the following response:

The search word is elasticSearch and the resulting document is {message=[message], fragments[[learning <em>elasticSearch</em>]]}

The search word is elasticSearch and the resulting document is {message=[message], fragments[[trying out <em>Elasticsearch</em>]]}

The search word is elasticSearch and the resulting document is {message=[message], fragments[[another post about <em>elasticSearch</em>]]}

but I don't know how to display this in an HTML Play framework template.

Here is my code, Searching.scala:

object Searching extends Controller {

  val SearchForm = Form(
    mapping(
      "searchq" -> nonEmptyText(1, 20)
    )(SearchQuery.apply)(SearchQuery.unapply))

  val node = nodeBuilder().client(true).node()
  val client = node.client()
  var o: Object = null

  def add = Action {
    Ok(views.html.SearchForm(SearchForm))
  }

  def save = Action { implicit request =>
    SearchForm.bindFromRequest().fold(
      hasErrors => BadRequest(views.html.SearchForm(hasErrors)),
      success => {
        val response = client.prepareSearch("twitter").setTypes("tweet")
          .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
          .setQuery(QueryBuilders.termQuery("message", success.searchq))
          .addHighlightedField("message")
          .execute().actionGet()

        val hits = response.getHits

        println("Found %d hits for query '%s'".format(hits.getTotalHits, success.searchq))

        var li = mutable.MutableList[Object]()
        for (hit <- hits.getHits) {
          // li += hit.sourceAsMap()("message")
          li += hit.getHighlightFields
        }

        println("object value is " + o)

        hits.getHits.foreach(hit => println("source is " + hit.getSource))

        Ok(views.html.DisplayResult(li))
      }
    )
  }
}

DisplayResult.scala.html

@* DisplayResult Template File *@
@import scala.collection._
@(li: mutable.MutableList[Object])

@main("Simpleform") {

@li.length

@for(a <- li) {
  <h1>The search word is @a</h1>
}
<a href="@routes.Searching.add" class="btn">Back</a>


}
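Not from the original post, but one hedged way to do this (method names assume the Elasticsearch 1.x Java client) is to flatten the highlight fragments into plain strings in the controller and render them unescaped in the template:

// in the controller: collect each hit's highlight fragments as strings
import scala.collection.JavaConversions._

val fragments: List[String] = hits.getHits.toList.flatMap { hit =>
  hit.getHighlightFields.values.flatMap(_.fragments.map(_.string))
}
// then: Ok(views.html.DisplayResult(fragments))

In the template, rendering each fragment with @Html(fragment) instead of @fragment keeps the <em> tags from being escaped, so the highlighting actually shows.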

by user3801239 at December 18, 2014 10:38 AM

TheoryOverflow

How and why does the Recrypt function work?

The general approach presented by Craig Gentry in 2009 to create a fully homomorphic encryption system is roughly the following:

  • Create a scheme that can evaluate some functions (increasing the noise in the ciphertext)

  • Change you decryption function to be one of these functions that can be evaluated

  • Use a function Recrypt to somehow decrypt and re-encrypt the ciphertext, eliminating the noise introduced by the homomorphic operations.

The idea seems wonderful, but I don't understand well how and why this Recrypt function works...

For example, in the section 4.3 of the paper Computing Arbitrary Functions of Encrypted Data, he explains it like that:

Imagine that we have a list of public keys $p_1, p_2, \dots$ and a private key $s_1$. Then we encrypt $m$ using $p_1$, generating $c_1$.

Then we encrypt each bit of $s_1$ using $p_2$, generating a vector of ciphertexts $\overline{s_1}$.

Then Recrypt encrypts each bit of $c_1$ using $p_2$, generating the array $\overline{c_1}$, and evaluates the decryption circuit $D$ on $\overline{c_1}$, $\overline{s_1}$ and $p_2$.

It seems like recrypt tries to decrypt the $\overline{c_1}$ with a wrong key (since it was encrypted with $p_2$, I was expecting something like $s_2$...).

Could someone here just try to explain how this Recrypt works? I don't know what I'm missing...

If my question is unclear, please, let me know.

Thanks.

by Vitor Lima at December 18, 2014 10:07 AM

StackOverflow

Functional Creating a list based on values in Scala

I have a task to traverse a sequence of tuples and, based on the last value in each tuple, make 1 or more copies of a case class Item. I can solve this with foreach and a mutable list. As I'm learning FP and the Scala collections, could it be done in a more functional way, with immutable collections and higher-order functions?

For example, input:

List(("A", 2), ("B", 3), ...)

Output:

List(Item("A"), Item("A"), Item("B"), Item("B"), Item("B"), ...)
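Not part of the original question, but the usual immutable answer here is flatMap combined with List.fill; a minimal sketch, assuming case class Item(name: String):

case class Item(name: String)

val input = List(("A", 2), ("B", 3))
val output = input.flatMap { case (name, count) => List.fill(count)(Item(name)) }
// List(Item(A), Item(A), Item(B), Item(B), Item(B))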

by Sayat Stb at December 18, 2014 10:02 AM

Fefe

The US intelligence agencies are delighted about the Sony story ...

The US intelligence agencies are delighted about the Sony story and are immediately exploiting it for a "North Korea did it". The implicit subtext: we told you so! Cyberwar is coming and only we can protect you! And now finally stop whining about the surveillance program!1!!

I have to say that, under the circumstances, Sony has actually maneuvered quite cleverly. They turned this from "we were too incompetent" into "the dastardly terrorists from North Korea!1!!". And with terrorists, as everyone knows, the victim is not to blame; there's simply nothing you can do!1!!

The current state of affairs is that, because of some nebulous threats, the cinemas decided one after another not to show this film. Sony then pulled it from the Christmas schedule entirely. What a great advertising campaign for the DVD release! Now everyone wants to see the film that motivated North Korea to terror attacks! As an aside, remember the cinema massacres in the USA in which there actually were dozens of dead. At the Batman film, you remember? Back then, Batman was kept running.

December 18, 2014 10:01 AM

CompsciOverflow

The amount of ROM needed to implement a 4-bit multiplier?

For a 4-bit multiplier there are $2^4 \cdot 2^4 = 2^8$ combinations.

The output of 4-bit multiplication is 8 bits, so the amount of ROM needed is $2^8 \cdot 8 = 2048$ bits.

Why is that? Why does the ROM need all the combinations embedded into it?
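A small sketch (not from the original post) of the lookup-table view may help: the ROM implements multiplication purely as a table, with the two 4-bit operands concatenated into an 8-bit address, so all $2^8$ addresses must hold a precomputed 8-bit product whether or not they are ever used:

// build the full 256-entry table: address = (a << 4) | b, contents = a * b;
// 256 entries x 8 bits each = 2048 bits of ROM
val rom: Array[Int] = Array.tabulate(256) { addr =>
  val a = (addr >> 4) & 0xF
  val b = addr & 0xF
  a * b // always fits in 8 bits, since 15 * 15 = 225 < 256
}
println(rom((9 << 4) | 7)) // 63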

What will be the case with RAM?

by Ravi Teja at December 18, 2014 09:42 AM

Do all greedy algorithms produce just the first solution, no matter how bad it is?

In all the examples of greedy algorithms I've seen so far, such as the activity selection problem and the unit-sized set coverage problem, the algorithm is usually very simple and intuitive and returns the first set that satisfies the constraint under the greedy strategy.

For example, in the activity selection problem, all we need to do is go down the list and keep finding solutions of the type $d_i > f_j$, and the first list that results (even if it is nearly empty) is considered the greedy solution.

My question: this strategy doesn't even take into consideration other cases which may be better. Is this the signature of a greedy algorithm?

Thanks

by Math Newb at December 18, 2014 09:33 AM

StackOverflow

How can I combine two case classes and store them into a table

I have two case classes: A, B. I want to combine A and B and store the combination class into a database. For instance:

case class A(a: String, b: Int)
case class B(c: Double, d: Int)

// TODO: class C has the properties of A and B
class C(val a: A, val b: B) {}

// Now I want to create a table for C
class Tb(tag: Tag) extends Table[C](tag, "a") {
  def col1 = column[String]("k1")
  def col2 = column[Int]("k2")
  def col3 = column[Double]("k3")
  def col4 = column[Int]("k4")
  def * = ??? // TODO: I don't know how to write the * function either
}

Could anyone help me fill in the "TODO"s in the code above?
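Not an authoritative answer, but a hedged sketch of the usual Slick 2.x approach: let the * projection map the flat column tuple to and from C with <>:

// a minimal sketch: <> maps the column tuple into C and back out again
class Tb(tag: Tag) extends Table[C](tag, "a") {
  def col1 = column[String]("k1")
  def col2 = column[Int]("k2")
  def col3 = column[Double]("k3")
  def col4 = column[Int]("k4")

  def * = (col1, col2, col3, col4) <> (
    { t: (String, Int, Double, Int) => new C(A(t._1, t._2), B(t._3, t._4)) },
    { c: C => Some((c.a.a, c.a.b, c.b.c, c.b.d)) }
  )
}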

by worldterminator at December 18, 2014 09:31 AM

TheoryOverflow

Does software have weight? [closed]

I am confused about whether software has weight or not.

Regards, Arif

by user30323 at December 18, 2014 09:23 AM

CompsciOverflow

How to learn computer science the right way [on hold]

I looked through the sites and this seems to be the best place to ask this question. I am sure it has been asked 10000 times, but I am looking for an answer tailored to me and my situation. I currently work as a project manager for a startup. I have a working knowledge of many different fields, ranging from HTML, CSS, PHP, JS, MySQL, and server administration to marketing and SEO. I have been working in the web development industry for the last 10 years; I started when I was 11. I have an associate-level education but no BCS. I don't have the time right now to pursue that as much as I would love to in my current situation (newborn son), and my wife is pursuing her education currently. I knew all these languages, but I don't really know them. I want to truly understand them, why they work the way they do, and how I can master them. This is more of a personal desire than one driven by money, as it most likely won't affect my career. I really would like to learn a new language.

I have about 3-5 hours every weeknight, while I have my son and my wife is in school, to study at home. I can't really afford college or guarantee that I will have the time available to make that commitment, so I would like a no-pressure solution. I have completed a lot of Treehouse and didn't find much of a challenge. The courses really didn't dig into the languages or teach me anything new.

I am looking to start learning things like computer logic, but I need some pointers on where to start and what terms to search for to find this educational info. After I truly understand the inner workings of a computer, from the microprocessor to RAM to hard drive to interpreter and so on, I would like to learn a new programming language. My current PHP skills aren't great, and it's more of a working knowledge than understanding, so treat me as a new programmer.

To sum things up

  • Can I have some recommendations of key concepts to learn for new software engineers
  • If anyone can provide good education sources, please do so
  • I would like some recommendations of languages to learn that are in demand, are new, and aren't going anywhere anytime soon. From my current research it looks like Java or C# are my best options. Anything I should know about those? Assume the previous questions apply to them.
  • I am willing to pay for this

by Anthony Accetturo III at December 18, 2014 09:12 AM

Fefe

Do you remember Edathy's work laptop with ...

Do you remember Edathy's work laptop with the offensive pictures on it? The one he reported stolen shortly after his apartment and offices had been searched and no evidence had been found?

Well, a few days ago Edathy outed Hartmann as his informant (which Hartmann denies). And now it turns out that Hartmann had already reported his work phone stolen back in March. What a surprise. I knew the Bundestag was full of criminals, but that it's THIS bad!1!!

December 18, 2014 09:01 AM

/r/compsci

How to determine optimal ordering?

Given:

  • A mapping of M keys to N values, where N is about 80% of the size of M
  • The ordering of M is fixed, but N can be ordered arbitrarily
  • The objective is to traverse M and N simultaneously, minimizing backwards seeks

How do you find the optimal ordering of N to accomplish this?

Right now I've got dummy code in place which just orders the items of N according to their first reference in M. However, I'm pretty sure that's suboptimal. In a case where an item of N has one reference early in M, and a bunch towards the end, intuitively you'd want that item to fall towards the end. I'm having trouble coercing this intuition into an algorithm, though.

Any ideas? I'm not too worried about the runtime of the sorting algorithm; read time is much more important, and anyway we're dealing with sets on the order of 100 items.

[edit] clarification: This is for a book, for inventory purposes. I am given an index of items, a large number of photographs of said items, and the mapping between them. Some photographs have more than one item on the picture. A human has to run down the index, and verify that all items in the photo are physically present. Thus, I want to minimize backtracking among the photos as the person runs down the index of items and the list of photos, respectively.
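Not a definitive answer, but the intuition in the clarification is exactly what the barycenter heuristic from graph layout does: order each item of N by the mean position of its references in M, so an item with one early reference and many late ones lands late. A hedged sketch:

// a minimal sketch of the barycenter heuristic: sort each item of N by the
// average index of its references in M (refs maps item -> positions in M)
def order[N](refs: Map[N, Seq[Int]]): Seq[N] =
  refs.toSeq
    .sortBy { case (_, positions) => positions.sum.toDouble / positions.size }
    .map(_._1)

This is a heuristic rather than a proven optimum, but at around 100 items it is cheap enough to refine with local search afterwards.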

submitted by coriolinus
[link] [9 comments]

December 18, 2014 08:55 AM

StackOverflow

Create a map from a collection using a function

I want to create a map from a collection by providing it a mapping function. It's basically equivalent to what a normal map method does, only I want it to return a Map, not a flat collection.

I would expect it to have a signature like

def toMap[T, S](T => S): Map[T, S]

when invoked like this

val collection = List(1, 2, 3)
val map: Map[Int, String] = collection.toMap(_.toString + " seconds")

the expected value of map would be

Map(1 -> "1 seconds", 2 -> "2 seconds", 3 -> "3 seconds")

The method would be equivalent to

val collection = List(1, 2, 3)
val map: Map[Int, String] = collection.map(x => (x, x.toString + " seconds")).toMap

Is there such a method in Scala?
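As far as I know there is no such single method in the 2.11 standard library; a hedged sketch of an enrichment that provides it (the name toMapBy is made up here to avoid clashing with the existing toMap):

implicit class ToMapBy[T](val xs: TraversableOnce[T]) extends AnyVal {
  def toMapBy[S](f: T => S): Map[T, S] = xs.map(x => x -> f(x)).toMap
}

val map: Map[Int, String] = List(1, 2, 3).toMapBy(_.toString + " seconds")
// Map(1 -> "1 seconds", 2 -> "2 seconds", 3 -> "3 seconds")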

by Zoltán at December 18, 2014 08:30 AM

Haskell, Scala, Clojure, what to choose for high performance pattern matching and concurrency [closed]

I have started working with FP recently, after reading a lot of blogs and posts about the advantages of FP for concurrent execution and performance. My need for FP has been largely influenced by the application I am developing: a state-based data injector into another subsystem where timing is crucial (close to 2 million transactions per second). I have a couple of such subsystems which need to be tested. I am seriously considering FP for its parallelism and want to take the correct approach; many posts on SO discuss the advantages and disadvantages of Scala, Haskell and Clojure w.r.t. language constructs, libraries and JVM support. From a language point of view I am OK with learning any language as long as it helps me achieve the result.

Certain posts favor Haskell for pattern matching and simplicity of language; JVM-based FP languages have a big advantage with respect to using existing Java libraries. Jane Street is a big OCaml supporter, but I am really not sure about developer support and help forums for OCaml.

If anybody has worked with handling such large data, please share your experience.

by 2ndlife at December 18, 2014 08:22 AM

What are the differences between Clojure, Scheme/Racket and Common Lisp?

I know they are dialects of the same family of languages called Lisp, but what exactly are the differences? Could you give an overview, if possible, covering topics such as syntax, characteristics, features and resources?

by Viclib at December 18, 2014 08:18 AM

Not possible to source .bashrc with Ansible

I can ssh to the remote host and do a source /home/username/.bashrc - everything works fine. However if I do:

- name: source bashrc
  sudo: no
  action: command source /home/username/.bashrc

I get:

failed: [hostname] => {"cmd": ["source", "/home/username/.bashrc"], "failed": true, "rc": 2}
msg: [Errno 2] No such file or directory

I have no idea what I'm doing wrong...
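Not an authoritative fix, but worth noting: source is a shell builtin, and Ansible's command module does not run a shell, so there is no binary called source to execute (hence Errno 2). A hedged sketch using the shell module instead:

# a minimal sketch: run through a real shell so the builtin exists; the
# sourced environment only lasts for this one task's shell anyway
- name: run something with .bashrc loaded
  shell: source /home/username/.bashrc && echo $SOME_VAR
  args:
    executable: /bin/bash

Note that sourcing in one task cannot change the environment of later tasks, since each task gets a fresh shell.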

by pldimitrov at December 18, 2014 08:11 AM

TheoryOverflow

How could God authenticate in one message?

Thought experiment:
What data could convince experts, beyond reasonable doubt, of its origin outside our universe? At what threshold should an expert start to take such a claim seriously?

For example, if one presented the factorization of a run of a billion numbers starting at $2^{1024}$, with proofs of primality of all factors (that wouldn't be a very large thing, either by amount of information or by complexity of verification, by 21st-century standards), it would be spectacular. But who knows what exactly the complexity of integer factorization is? Who knows whether the factorization of this particular run wasn't facilitated by some mathematical coincidences?

But there are many problems with proven lower bounds on complexity that are really prohibitive, some of which hinder even application of powerful quantum computers (still hypothetical), and some problems are algorithmically undecidable in principle.

P.S. Please do not post answers based on trivia about transcomputational problems. I'm interested only in answers containing insights about how a (hypothetical) piece of information can be defended against the hypothesis that some (still unknown to experts) mathematics was employed to produce it.


Update (related to @usul’s answer): we are not considering a totally abstract problem. The alleged “god” may use information from our civilization in the input data for the problems solved, e.g. incorporating long pieces of “our” predefined, presumably random data to convince us that the particular inputs were not specially arranged.

by Incnis Mrsi at December 18, 2014 08:09 AM

StackOverflow

MariaDB won't start on FreeBSD 10

I installed MariaDB on my home server with FreeBSD 10, but when I try to run it I get the following error:

141217 18:30:41 mysqld_safe Starting mysqld daemon with databases from /db/mysql
141217 18:30:41 [ERROR] mysqld: File './mysql-bin.index' not found (Errcode: 13     "Permission denied")
141217 18:30:41 [ERROR] Aborting

141217 18:30:41 [Note] /usr/local/libexec/mysqld: Shutdown complete

141217 18:30:41 mysqld_safe mysqld from pid file /db/mysql/lordkelvin.coselosche.org.pid ended

But I think the permissions are right:

    root@lordkelvin /db/mysql # ls -l
    total 111780
    -rw-rw----  1 mysql  mysql     16384 Dec 17 18:00 aria_log.00000001
    -rw-rw----  1 mysql  mysql        52 Dec 17 18:00 aria_log_control
    -rw-rw----  1 mysql  mysql  50331648 Dec 17 18:00 ib_logfile0
    -rw-rw----  1 mysql  mysql  50331648 Dec 17 18:00 ib_logfile1
    -rw-rw----  1 mysql  mysql  12582912 Dec 17 18:00 ibdata1
    -rw-r-----  1 mysql  mysql      2280 Dec 17 18:30 lordkelvin.coselosche.org.err
    drwx------  2 mysql  mysql        89 Dec 17 18:00 mysql/
    -rw-rw----  1 mysql  mysql     69110 Dec 17 18:00 mysql-bin.000001
    -rw-rw----  1 mysql  mysql    977605 Dec 17 18:00 mysql-bin.000002
    -rwxrwxrwx  1 mysql  mysql        48 Dec 17 18:22 mysql-bin.index*
    -rw-rw----  1 mysql  mysql         9 Dec 17 18:00 mysql-bin.state
    drwx------  2 mysql  mysql        55 Dec 17 18:00 performance_schema/
    drwx------  2 mysql  mysql         2 Dec 17 18:00 test/
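(Not in the original post: ./mysql-bin.index is resolved relative to the datadir, and the listing shows the files but not the permissions of /db/mysql itself or of /db. A hedged check:)

# the parent directories must be searchable/writable by the mysql user;
# if one is owned by root, something like chown mysql:mysql /db/mysql
# would be the usual fix
ls -ld /db /db/mysql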

Thanks everyone

by linkxvi at December 18, 2014 07:59 AM

StackOverflow

Applicative vs. monadic combinators and the free monad in Scalaz

A couple of weeks ago Dragisa Krsmanovic asked a question here about how to use the free monad in Scalaz 7 to avoid stack overflows in this situation (I've adapted his code a bit):

import scalaz._, Scalaz._

def setS(i: Int): State[List[Int], Unit] = modify(i :: _)

val s = (1 to 100000).foldLeft(state[List[Int], Unit](())) {
  case (st, i) => st.flatMap(_ => setS(i))
}

s(Nil)

I thought that just lifting a trampoline into StateT should work:

import Free.Trampoline

val s = (1 to 100000).foldLeft(state[List[Int], Unit](()).lift[Trampoline]) {
  case (st, i) => st.flatMap(_ => setS(i).lift[Trampoline])
}

s(Nil).run

But it still blows the stack, so I just posted it as a comment.

Dave Stevens just pointed out that sequencing with the applicative *> instead of the monadic flatMap actually works just fine:

val s = (1 to 100000).foldLeft(state[List[Int], Unit](()).lift[Trampoline]) {
  case (st, i) => st *> setS(i).lift[Trampoline]
}

s(Nil).run

(Well, it's super slow of course, because that's the price you pay for doing anything interesting like this in Scala, but at least there's no stack overflow.)

What's going on here? I don't think there could be a principled reason for this difference, but really I have no idea what could be going on in the implementation and don't have time to dig around at the moment. But I'm curious and it would be cool if someone else knows.

by Travis Brown at December 18, 2014 07:49 AM

TheoryOverflow

Clustering without specifying the number of clusters a priori

Does anyone know of an algorithm that can perform the following tasks:

  1. Unsupervised clustering without specifying the number of clusters a priori. For example, if all the buildings in a wide geographical area are plotted as points on a 2D plane, such an algorithm should be able to identify the number of settlements, such as cities, towns and villages, regardless of the size of the settlements.

  2. Thinning each cluster of points down to representative points. In the wide geographical area example, this thinning operation would select a number of "representative" dwellings (of minimum size 1, for the smallest settlements) for each settlement.

by Olumide at December 18, 2014 07:46 AM

StackOverflow

freebsd : PHP Fatal error: Call to undefined function __()

I installed phpMyAdmin from ports and I've run into a problem.

os : freebsd9.2

PHP 5.4.34 (cli) (built: Dec 8 2014 06:11:42)

mysql :5.5.40-log Source distribution

These are the PHP modules I have installed:

bz2 Core ctype curl date dom ereg fileinfo filter ftp gd gettext hash iconv json libxml mbstring mcrypt mhash mssql mysql mysqli mysqlnd openssl pcre PDFlib PDO pdo_sqlite Phar posix Reflection session SimpleXML sockets SPL sqlite3 standard tokenizer xml xmlreader xmlwriter zip zlib

Here is the error log:

[pid 1592:tid 682695936] PHP Fatal error:  Call to undefined function __() in 
/usr/local/www/apache24/phpMyAdmin/libraries/core.lib.php on line 229

and the code in core.lib.php

// these variables are used in the included file libraries/error.inc    .php
229         $error_header = __('Error');
230         $lang = $GLOBALS['available_languages'][$GLOBALS['lang']][1];
231         $dir = $GLOBALS['text_dir'];
232     

The fix I found suggests using phpinfo() to find the session file, but my PHP version doesn't seem to have that file, so I don't know how to solve this. Does anyone have a good method? Thank you!

by yad50968 at December 18, 2014 07:25 AM

Forms in Scala play framework

Hello, I am a beginner with the Scala Play framework. I am unable to create a form with two or more inputs. I googled it and found nothing for the Scala programming language. Kindly suggest how to create multiple inputs in a form using Scala. I did this:

val form = Form(
  tuple(
    "firstname" -> text,
    "lastname" -> text
  )
)

and to get the values:

val (fname, lname) = form.bindFromRequest.get
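Not part of the original question, but a hedged sketch of a matching view using Play's form helpers; the action name routes.Application.submit is made up for illustration:

@* a minimal sketch (app/views/userForm.scala.html) *@
@(userForm: Form[(String, String)])

@helper.form(action = routes.Application.submit()) {
  @helper.inputText(userForm("firstname"))
  @helper.inputText(userForm("lastname"))
  <input type="submit" value="Save">
}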

Am I following the correct way? Kindly suggest any ideas or resources for learning the Scala Play framework. Thanks in advance.

by shashank at December 18, 2014 07:23 AM

JavaScript function that returns function with parameters

I'm doing the functional programming tutorials on the nodeschool homepage. I'm new to JS (I came from Java), so I don't get some aspects of JS, for example:

function say(word) {
   return function(anotherWord) {
        console.log(anotherWord);
    }
}

If I call:

say("hi"); // it returns nothing

say("hi", "hi"); // it returns nothing

var said = say("hi"); // asignment

said("hi"); // returns hi -- but why?

said(); // returns undefined;

Can someone explain to me how the "hi" in the outer function is passed to the inner function?
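Not from the original post, but a small variation makes the data flow visible: say("hi") does not return nothing, it returns the inner function (which the first two calls simply discard), and word is only observable if the inner function actually references it:

// a minimal sketch: the inner function closes over `word`
function say(word) {
  return function (anotherWord) {
    console.log(word, anotherWord); // the captured `word` is now visible
  };
}

var said = say("hi"); // binds word = "hi" and returns the inner function
said("there");        // prints: hi there
said();               // prints: hi undefined (anotherWord was not passed)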

by user3919831 at December 18, 2014 07:13 AM

mapping in spark using scala

I have a tuple:

  val key = List (protocol,source,destination,port)

for each RDD. I need to map this to

(protocol, List(source, destination, port))

which should then be reduced to (source, (destination1, destination2)) pairs, grouped by protocol. Finally it should be a tuple like

(protocol , (source ,(destination1,destination2))).

The output I need is

{(tcp , (xx.xx.xx.xx ,(ww.ww.w.w,rr.rr.r.r))) , (udp,(yy.yy.yy.yy,(ww.ww.w.w,rr.rr.r.r)))}
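Not an authoritative answer, but a hedged sketch of the grouping, assuming rdd is an RDD of (protocol, source, destination, port) tuples:

// key by (protocol, source), collect the destinations, then re-key by
// protocol to end up with (protocol, (source, destinations))
val result = rdd
  .map { case (protocol, source, destination, port) =>
    ((protocol, source), destination)
  }
  .groupByKey()
  .map { case ((protocol, source), dests) =>
    (protocol, (source, dests.toList))
  }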

by user3823859 at December 18, 2014 07:09 AM

Error making generic slick DAO

I am trying to make a generic DAO class to use in all my DAO objects.

For instance, I have the following autogenerated Slick models (in a file dbTables.Tables):

  case class UserusergroupsRow(userusergroupid: Int, usergroupid: Int, userid: Int, status: String, createdby: Option[String], createddate: Option[java.sql.Timestamp], lastupdatedby: Option[String], lastupdateddate: Option[java.sql.Timestamp])


class Userusergroups(tag: Tag) extends Table[UserusergroupsRow](tag, "userUserGroups") {
  def * = (userusergroupid, usergroupid, userid, status, createdby, createddate, lastupdatedby, lastupdateddate) <> (UserusergroupsRow.tupled, UserusergroupsRow.unapply)

  /** Maps whole row to an option. Useful for outer joins. */
  def ? = (userusergroupid.?, usergroupid.?, userid.?, status.?, createdby, createddate, lastupdatedby, lastupdateddate).shaped.<>({ r => import r._; _1.map(_ => UserusergroupsRow.tupled((_1.get, _2.get, _3.get, _4.get, _5, _6, _7, _8))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))

  /** Database column userUserGroupId PrimaryKey */
  val userusergroupid: Column[Int] = column[Int]("userUserGroupId", O.PrimaryKey)
  /** Database column userGroupId */
  val usergroupid: Column[Int] = column[Int]("userGroupId")
  /** Database column userId */
  val userid: Column[Int] = column[Int]("userId")
  /** Database column status */
  val status: Column[String] = column[String]("status")
  /** Database column createdBy */
  val createdby: Column[Option[String]] = column[Option[String]]("createdBy")
  /** Database column createdDate */
  val createddate: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("createdDate")
  /** Database column lastUpdatedBy */
  val lastupdatedby: Column[Option[String]] = column[Option[String]]("lastUpdatedBy")
  /** Database column lastUpdatedDate */
  val lastupdateddate: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("lastUpdatedDate")
}

Now I try to make an abstract class:

abstract class genericDao[tableClassType <: Table[caseClassType], caseClassType] {

  // var a: caseClassType
  // var b: tableClassType

  val tableQuery: TableQuery[tableClassType]

  def getAllfrom = DB.withTransaction { implicit session =>
    tableQuery.list
  }

  def saveT(obj: caseClassType) = DB.withTransaction { implicit session =>
    tableQuery += obj
  }

}

Finally, here is my DAO object:

object userDao extends genericDao[Userusergroups, UserusergroupsRow] {

  val tableQuery = TableQuery[Userusergroups]

  // create a new user

}

But when I try to invoke userDao.getAllfrom it shows the error: type arguments [dbTable.Tables.Userusergroups,dbTable.Tables.UserusergroupsRow] do not conform to class genericDao's type parameter bounds [tableClassType <: play.api.db.slick.Config.driver.simple.Table[caseClassType],caseClassType]
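Not a definitive fix, but the error message itself points at the mismatch: the generated Userusergroups extends the Table type from one driver import, while genericDao's bound refers to play.api.db.slick.Config.driver.simple.Table; the two must be the same type. A hedged sketch with a single shared import (signatures simplified):

// a minimal sketch: both the generated tables and the generic DAO must see
// the *same* Table type, i.e. the one from Play-Slick's configured driver
import play.api.db.slick.Config.driver.simple._

abstract class GenericDao[T <: Table[R], R] {
  val tableQuery: TableQuery[T]

  def getAll(implicit session: Session): List[R] = tableQuery.list
  def save(row: R)(implicit session: Session): Int = tableQuery += row
}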

by Pradeep Saini at December 18, 2014 07:02 AM

TheoryOverflow

Example of a context-free language which satisfies the pumping lemma [on hold]

I'm a beginner in automata theory. I found the pumping lemma an interesting topic. I know how to prove a language is not context-free using the pumping lemma, but I haven't found any example of a context-free language which satisfies the pumping lemma. Can anyone please give an example and explain it?
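A worked example, not part of the original question: take $L = \{a^n b^n : n \ge 0\}$, which is context-free. $L$ satisfies the lemma with pumping length $p = 2$: for any $s = a^m b^m$ with $|s| \ge p$, choose the decomposition $s = uvxyz$ with $u = a^{m-1}$, $v = a$, $x = \varepsilon$, $y = b$, $z = b^{m-1}$. Then $|vxy| = 2 \le p$, $|vy| = 2 \ge 1$, and for every $i \ge 0$ we have $uv^ixy^iz = a^{m-1+i}b^{m-1+i} \in L$, so all three conditions of the lemma hold.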

by Menuka Ishan at December 18, 2014 06:53 AM

StackOverflow

How to parse Cursor[JsObject] in scala reactive mongo

I have an API like this in Play 2.3 with ReactiveMongo:

def addEndUser = Action.async(parse.json) { request =>
  val cursor: Cursor[JsObject] =
    collectionEndUser.find(Json.obj("mobileNumber" -> "9686563240", "businessUserId" -> "1"))
      .sort(Json.obj("createDate" -> -1)).cursor[JsObject]
  val futureEndUserList: Future[List[JsObject]] = cursor.collect[List]()
  futureEndUserList.map { user =>
    val x: JsObject = obj(Map("endUsers" -> toJson(user)))
    println(x)
  }

  request.body.validate[User].map { user =>
    val jsonData = Json.obj(
      "businessUserId" -> user.businessUserId,
      "userId" -> user.userId,
      "registrantId" -> user.registrantId,
      "memberId" -> "",
      "name" -> user.name,
      "currentPoints" -> user.currentPoints,
      "email" -> user.email,
      "mobileNumber" -> user.mobileNumber,
      "mobileCountryCode" -> user.mobileCountryCode,
      "createDate" -> (new java.sql.Timestamp(new Date().getTime)).toString,
      "updateDate" -> (new java.sql.Timestamp(new Date().getTime)).toString,
      "purchasedAmtForRedemption" -> user.purchasedAmtForRedemption
    )

    collectionEndUser.insert(jsonData).map { lastError =>
      Logger.debug(s"Successfully inserted with LastError: $lastError")
      Created
    }
  }.getOrElse(Future.successful(BadRequest("invalid json")))
}

def findEndUserByUserId(userId: String) = Action.async {
    val cursor: Cursor[JsObject] = collectionEndUser.find(Json.obj("userId" -> userId)).
    sort(Json.obj("createDate" -> -1)).cursor[JsObject]

    val futureEndUserList: Future[List[JsObject]] = cursor.collect[List]()

    //val futureEndUserJsonArray: Future[JsArray] = futureEndUserList.map { endUser =>
        //Json.arr(endUser)
    //}

    futureEndUserList.map { user =>
        Ok(toJson(Map("endUsers" -> toJson(user) )))
    }
}

This API is called as a POST method to store those fields in the DB. But before adding to the DB, I want to get a value from a collection and use it in one of the fields. println(x) is printing the object like this:

{"endUsers":[{"_id":{"$oid":"543f6912903ec10f48673188"},"businessUserId":"1","createDate":"2014-10-16 12:13:30.771","currentPoints":16.0,"email":"ruthvickms@gmail.com","mobileCountryCode":"+91","mobileNumber":"9686563240","name":"Ruthvick","purchasedAmtForRedemption":50.0,"updateDate":"2014-10-17 20:23:40.725","userId":"5"},{"_id":{"$oid":"543f68c0903ec10f48673187"},"businessUserId":"1","userId":"4","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 12:12:08.692","updateDate":"2014-10-16 12:12:08.692","purchasedAmtForRedemption":"0"},{"_id":{"$oid":"543f689e903ec10f48673186"},"businessUserId":"1","userId":"3","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 12:11:34.079","updateDate":"2014-10-16 12:11:34.079","purchasedAmtForRedemption":"0"},{"_id":{"$oid":"543f63ef903ec10f48673185"},"businessUserId":"1","userId":"2","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 11:51:35.394","updateDate":"2014-10-16 11:51:35.394","purchasedAmtForRedemption":"0"}]}

but parsing it as x.endUsers[0].name throws an error like

 identifier expected but integer literal found.
    println(x.endUsers[0].name)

Please help me parse this. I'm a beginner with the Play framework.
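Not part of the original question: x is a Play JsValue, so the JavaScript-style x.endUsers[0].name cannot compile; a hedged sketch of the Play JSON traversal instead:

// \ descends into an object field, (0) indexes into a JsArray
val name: String = ((x \ "endUsers")(0) \ "name").as[String]

// or, defensively, when the path may be missing:
val maybeName: Option[String] = ((x \ "endUsers")(0) \ "name").asOpt[String]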

Thanks

by user3777846 at December 18, 2014 06:38 AM

/r/emacs

pycomplete

Hi, I am trying to use this (on Mac OS, with Emacs 24.4):

(add-hook 'python-mode-hook
          (lambda ()
            (setq ac-sources
                  '(ac-source-pycomplete
                    ac-source-abbrev
                    ac-source-dictionary
                    ac-source-words-in-same-mode-buffers))))

When I type, I get a message: auto-complete error: (void-variable ac-source-pycomplite)

submitted by b0ris0v
[link] [4 comments]

December 18, 2014 06:12 AM

QuantOverflow

How to reduce the variability of investment returns by increasing average expected return? [on hold]

I will illustrate my question with a simple example:

Say a stock called ABC is currently trading at 100.00, with an average expected return of 0% per year, and has a 50% probability of touching 90.00 anytime within one year.

If we increase the average expected return to 10% per year, what is the new probability of touching 90.00 anytime within one year?

by Golden Goose at December 18, 2014 06:06 AM

StackOverflow

Given sets A and B, is there a standard library function to generate the intersection, A - B, and B - A?

That is, I'm looking for a standard or quasi-standard (Apache Commons, Guava, etc.) library function that will efficiently produce this:

def f[T](oldSet: Set[T], newSet: Set[T]): (Set[T], Set[T], Set[T]) = {
    val removed = oldSet.diff(newSet)
    val kept = oldSet.intersect(newSet)
    val added = newSet.diff(oldSet)
    (removed, kept, added)
}

Obviously this isn't hard to write, and I'm sure I could make a more or less optimal implementation without much effort, but the need for this has come up often enough for me that I'm baffled that there doesn't seem to be a well-known library function to do this. Am I missing something, or does it really not exist?

EDIT: For those who are pointing me to Scala's standard intersect and diff functions and their operator equivalents, I appreciate the thought, but I already know about these, as you can see from the fact that I use them in the example above. I'm looking for a standard library function that is functionally equivalent to and more efficient than the function f() defined above.

by Brandon Berg at December 18, 2014 05:56 AM

Installing SBT on Win 7 64 bit

I want to install Apache Spark for testing purposes. For that I found out that Scala and sbt are necessary. I downloaded the Scala MSI and installed it. For installing sbt I tried various methods but was unable to do so. Can someone tell me what I am doing wrong? What I did is:

  1. Install Scala msi
  2. Download sbt msi and install it.
  3. Set the SBT_HOME and PATH variables to the location where sbt is extracted. Then I opened cmd to check my sbt version using sbt sbt-version, and I get the following error:

unresolved dependency: org.fusesource.jansi#jansi;1.11: not found
Error during sbt execution: Error retrieving required libraries
  (see C:\Users\ashish-b\.sbt\boot\update.log for complete log)
Error: Could not retrieve jansi 1.11

What's wrong here?

by Ashish at December 18, 2014 05:56 AM

Scala Play ReactiveMongo - Arbitrary list of query parameters

I'm trying to support arbitrary filters for a REST API that fetches a list of documents from MongoDB. For instance

  • //example.com/users <- list all
  • //example.com/users?age=30 <- all users who are 30
  • //example.com/users?age=30&name=John <- all users who are 30 and called John
  • ...

I'm using Play-ReactiveMongo and dealing with JSONCollection objects only.

So in my routes I put

GET   /users        controllers.Users.list(id: Option[String], name: Option[String], age: Option[Int])

But there are two problems with that: first, I'll need a pretty long list of optional parameters, and second, in my controller I need to pattern match on all of them to check whether they're empty or not while building the selector that I use to filter my collection.

var filters = JsObject(Nil)
name match {
  case Some(x) => filters += ("name" -> JsString(x))
  case None => None
}

I realized that I can get the full query string from the request object, which is a Map[String, Seq[String]]. But then I don't know a good way to check whether values are String or something else.
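Not a definitive answer, but one idiomatic option is to fold the raw query string straight into the selector instead of declaring every optional parameter in the route; a hedged sketch that treats every value as a string (numeric fields would need a whitelist or a conversion step):

// a minimal sketch: turn ?age=30&name=John into {"age": "30", "name": "John"}
val selector: JsObject = request.queryString.foldLeft(Json.obj()) {
  case (obj, (key, values)) => obj + (key -> JsString(values.head))
}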

Is there a better, more idiomatic way to do what I want?

by user2934360 at December 18, 2014 05:52 AM

CompsciOverflow

Greedy proof: Correctness versus optimality

I am really confused after surveying a bunch of material online about correctness versus optimality proofs for greedy algorithms. Some websites even use both "correctness" and "optimal" in the same sentence!

From my best unconfirmed understanding, the optimality proof uses "greedy stays ahead", where I need to show that the greedy algorithm constructs a solution set that is no worse than the optimal set.

The correctness proof utilizes the swapping argument to show that any difference between output set A and optimal set OPT can be eliminated by swapping the items in the optimal set.

Can someone clarify whether I must use the "greedy stays ahead" method only for the optimality proof and not the correctness proof? And must I use the swapping argument (with contradiction) to show that swapping items in the optimal set eliminates the differences?

Greedy stay ahead: http://www.cs.cornell.edu/Courses/cs482/2007sp/ahead.pdf

Swapping: http://www.cs.oberlin.edu/~asharp/cs280/handouts/greedy_exchange.pdf (Note that author states that this proves correctness and ends with prove optimality)

Instance where swapping was used to prove optimality, greedy stay ahead used to prove correctness: http://test.scripts.psu.edu/users/d/j/djh300/cmpsc465/notes-4985903869437/notes-5z-unit-5-filled-in.pdf

by Math Newb at December 18, 2014 05:30 AM

StackOverflow

How to properly manage logback configrations in development and production using SBT & Scala?

I have a pretty standard Scalatra project using Logback for logging.

Following the logback manual I have added a logback-test.xml for my development configuration (logs of debugs), whilst maintaining a production logback.xml.

However, in development while using the xsbt-web-plugin to run a container with code reloading, my app only seems to pick up the logback.xml.

How do I get the desired behavior?

  1. In development mode (./sbt container:start) the app uses logback-test.xml
  2. When assembled into a zip using SBT-assembly, exclude the test config.

Currently neither of these seems to work.
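Not an authoritative recipe, but two hedged pieces that address the two halves: logback only prefers logback-test.xml when that file is actually on the runtime classpath, so where it lives (src/main/resources vs src/test/resources) decides whether the container sees it; and for the assembly half, sbt-assembly can simply discard it during packaging (key names assume sbt-assembly 0.12+):

// build.sbt sketch: make sure the dev/test logging config never ships
assemblyMergeStrategy in assembly := {
  case "logback-test.xml" => MergeStrategy.discard
  case other =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(other)
}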

by Matthew Rathbone at December 18, 2014 05:17 AM

Convert map to HashMap and apply transforms

I have a Java map, stringMap, where the keys are strings that represent dates and the values are longs. I need to convert this map into a collection.mutable.HashMap[DateTime, Long]. That is, in addition to changing the type of the map, I need to change the keys from Strings to DateTimes.

This is the solution I am currently using:

import scala.collection.JavaConversions._
val df = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
val stringMap: java.util.HashMap[String, Long] = ...
val dateTimeMap = stringMap map { case (k, v) => (df.parseDateTime(k), v)}
val result = collection.mutable.HashMap[DateTime, Long](dateTimeMap.toArray:_*)

The last line comes from this post

Note there is an implicit conversion converting stringMap to a scala Hashmap.

The question is: is there a better way to do this? I tried using a for/yield and the map method with breakOut, but they were returning a Map instead of a collection.mutable.HashMap. EDIT: Apparently breakOut does work. /EDIT

Preferably there would be a functional-programming way to do this without the need for the temporary dateTimeMap.
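Since the edit notes that breakOut does work, a hedged sketch of that version, which skips the temporary dateTimeMap entirely by letting the CanBuildFrom target pick the result type:

import scala.collection.JavaConversions._
import scala.collection.breakOut
import scala.collection.mutable

// map builds the mutable.HashMap directly via the explicit CanBuildFrom
val result: mutable.HashMap[DateTime, Long] =
  stringMap.map { case (k, v) => df.parseDateTime(k) -> v }(breakOut)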

by Kevin Wheeler at December 18, 2014 05:16 AM

CompsciOverflow

Can/Do multiple processes run simultaneously on a multi-core system?

I understand context switches and threading on a single core system, but I'm trying to understand what happens in a multi-core system. I know multiple threads from the same process can run simultaneously in a multi-core system. However, can multiple processes run simultaneously in such a system as well?

In other words, in a dual-core processor:

  • How many processes can run simultaneously (without context switching) if all processes are single-threaded?
  • How many processes can run simultaneously if there are 2 processes and both are multi-threaded?

by TriArc at December 18, 2014 04:51 AM

Wes Felter

The Networking Nerd: Cisco Just Killed The CLI

The Networking Nerd: Cisco Just Killed The CLI:

Maybe the industry will migrate to netconf. No wait, that’s now owned by Cisco.

December 18, 2014 04:35 AM

StackOverflow

How is this exactly a curried function?

I am reading Functional Programming in Scala, and going through different exericses. I encountered currying.

Can someone explain this curried function to me and how it works? I can't seem to understand this piece of code compared to the ones I saw on different blogs about currying in Scala.

def curry[A,B,C](f: (A, B) => C): A => (B => C) =  a => b => f(a,b)

Is a from a => b => f(a,b) a function? If so, what are its arguments?

Also, an update: what does the a in a => b => f(a,b) mean? Is it a function that will return a function named b that takes a single parameter and will return a function?
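A small usage example, not in the original post, may make the shape concrete: a is not a function, it is the name of the first argument, and a => b => f(a, b) is a function that, given a, returns another function still waiting for b:

def curry[A, B, C](f: (A, B) => C): A => (B => C) = a => b => f(a, b)

def add(x: Int, y: Int): Int = x + y

val curried: Int => (Int => Int) = curry(add)
val add3: Int => Int = curried(3) // this is b => add(3, b); 3 is captured
println(add3(4)) // 7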

by user962206 at December 18, 2014 04:27 AM

StackOverflow

Is anyone having success mixing scala and java files in IntelliJ

Is anyone having success mixing scala and java files in IntelliJ?

I am getting way too many strange errors...

Here:

Cannot Run File Located outside of Main Module IntelliJ 14 Java & Scala

And Here:

IntelliJ 14 Java and Scala ClassNotFoundException

Files are mixed in same module.

Many posts online from around 2011 claim the same sort of strange issues with mixing in the same project. I feel really uneasy approaching Scala if this has remained unresolved for years with no real articles explaining why.

I could be wrong, so I am asking the community for their experience.

by BAR at December 18, 2014 04:07 AM

QuantOverflow

Is it possible to model general wrong way risk via concentration risk?

General wrong-way risk (GWWR) is defined as risk due to a positive correlation between the level of exposure and the default probability of the counterparty, arising from general market factors. (Specific wrong-way risk is when they are positively correlated anyway.) According to the "Risk Concentration Principles" (bcbs63), "different entities within the conglomerate could be exposed to the same or similar risk factors or to apparently unrelated risk factors that may interact under some unusual stressful circumstances."

Given that different market factors tend to have a stronger positive correlation when one is talking about the same country/region (mainly the base curves), the same industry (mainly the spreads), etc., should concentration risk (per region, industry, ...) be used to model general wrong-way risk?

With 5 regions (Americas, UK, Europe (ex UK), Japan, Asia-Pacific (ex Japan)) and 10 sectors (Energy, Basic Materials, Industrials, Consumer Cyclical, Consumer Non-Cyclical, Health Care, Financials, Information Technology, Telecommunication Services and Utilities), you should be able to get the GWWR from a sort of variance of the concentration from the average_of_sectors (ideally 10%) and average_of_regions (ideally 20%). Say you have 40% of your exposure in Energy, 30% in Financials, 20% in Telecommunication Services and 10% in whatever else, versus a well-diversified book. What I mean is, assuming that the rest of the parameters are all the same (same maturities, types of instruments = bonds to simplify, principals, etc.), the GWWR should be much larger for 40-10-40-10 than for 30-30-30-10.

Ex1: A Swiss company receives CHF, buys materials in EUR and takes a loan in EUR to pay for them. In case the EUR rises with respect to CHF, both the probability of default of the company (raw materials increase in price) and its exposure in CHF increase. As default is a statistical property, having 40% of your portfolio as loans provided to many such companies will make you notice the defaults (which no longer behave idiosyncratically, as they would with a single company). Assume the lender does not structure its business around the EUR/CHF exchange risk.

Ex2: You are a European lender 10 years ago. People buy houses and earn salaries in the local currency and take mortgages in CHF, as CHF had very low/the lowest interest rates. The CHF rises by a factor of 1.25, and the exposure rises by 25%. The probability of default rises, as the price of the house/collateral does not rise in the local currency and the monthly payment goes well over the allowed indebtedness percentage. If you are providing many such mortgages, you are exposed to GWWR proportional to their concentration in your portfolio.

My question is whether general wrong-way risk is not a form of double counting. (Shouldn't wrong-way risk include only specific wrong-way risk?) Could someone please give an example of GWWR where concentration is not a factor?

I guess that one can regress credit risk/hazard rates on market factors and look for strong correlations, but this should already be accounted for by the stressed VaR.

by user7056 at December 18, 2014 03:52 AM

StackOverflow

Akka supervisor catch future Failure

I'm trying to develop an application using Futures and Akka supervisors, but when a Future returns a Failure to an actor, its supervisor is not getting the exception.

Here's my code.

1) Supervisor actor

class TransaccionActorSupervisor() extends Actor with ActorLogging {

  val actor: ActorRef = context.actorOf(Props[TransaccionActor].withRouter(RoundRobinPool(nrOfInstances = 5)), "transaccion-actor")

  def receive = {
    case msg: Any => actor forward msg
  }

  override val supervisorStrategy = OneForOneStrategy() {
    case exception =>
      println("<<<<<<<<<<<<<<<<<<< IN SUPERVISOR >>>>>>>>>>>>>>>>>>>>>>>>>>>>")
      Restart
  }

}

2) Supervised actor

class TransaccionActor() extends Actor with ActorLogging {

  implicit val _: ExecutionContext = context.dispatcher
  val transaccionAdapter = (new TransaccionComponentImpl with TransaccionRepositoryComponentImpl).adapter

  def receive = {

    case msg: GetTransaccionById =>
      val currentSender: ActorRef = sender()
      transaccionAdapter.searchTransaction(msg.id).onComplete {
        case Success(transaction) => currentSender ! transaction
        case Failure(error) => throw error
      }

  }
}

What am I doing wrong?
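Not an authoritative diagnosis, but a likely cause: the throw inside onComplete happens on the future's thread, outside the actor's message processing, so the supervisor never sees it. Piping the future back keeps failures inside the actor machinery; a hedged sketch:

import akka.pattern.pipe

// a minimal sketch (needs an implicit ExecutionContext, which this actor
// already has): on success the result goes to the original sender, on
// failure a Status.Failure(error) is delivered instead of an off-thread throw
def receive = {
  case msg: GetTransaccionById =>
    transaccionAdapter.searchTransaction(msg.id).pipeTo(sender())
}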

Thank you all very much!

by Rodrigo Cifuentes Gómez at December 18, 2014 03:33 AM

FreeBSD make error File 5.19 supports only version 12 magic files. /usr/share/misc/magic.mgc is version 8

I'm having trouble reinstalling ProFTPD on a FreeBSD 10.1 setup. The server is newly upgraded from 10.0 to 10.1. When I start the make install clean process, these warnings first show on screen:

===>  proftpd-1.3.5_4 depends on shared library: libpcre.so
/usr/share/misc/magic, 93: Warning: Printf format `l' is not valid for type `lelong' in description `, %ld pages'
...
/usr/share/misc/magic, 15118: Warning: Printf format `l' is not valid for type `belong' in description `Volume %ld,'
/usr/share/misc/magic, 15609: Warning: Current entry does not yet have a description for adding a MIME type
file: File 5.19 supports only version 12 magic files. `/usr/share/misc/magic.mgc' is   version 8
[: =: unexpected operator
- not found

And after a while, the make process stops with this error:

/bin/ln -s libpcre.so.1 /usr/ports/devel/pcre/work/stage/usr/local/lib/libpcre.so.3
====> Compressing man pages (compress-man)
===>  Installing for pcre-8.35_2
===>  Checking if pcre already installed
===>  pcre-8.35_2 is already installed
You may wish to ``make deinstall'' and install this port again
by ``make reinstall'' to upgrade it properly.
If you really wish to overwrite the old port of pcre
without deleting it first, set the variable "FORCE_PKG_REGISTER"
in your environment or the "make install" command line.
*** Error code 1
Stop.
make[3]: stopped in /usr/ports/devel/pcre
*** Error code 1

Stop.
make[2]: stopped in /usr/ports/devel/pcre
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/ftp/proftpd
*** Error code 1

Stop.
make: stopped in /usr/ports/ftp/proftpd

It seems that the file /usr/share/misc/magic.mgc is the wrong version? This might have happened when I upgraded from 10.0-RELEASE-p12 to 10.1-RELEASE-p1?

If I run make install clean for the ProFTPD port and disable support for pcre, the build and install succeed. But I believe that something is still broken?

My programming skills are limited, and so is my experience with this level of error. Please let me know if you have any ideas.

Thanks,

by Alldo at December 18, 2014 03:13 AM

UnixOverflow

LDAP Authentication on OpenBSD

I'm trying to get an OpenBSD server to authenticate users using the same LDAP server the rest of my home network uses. While 'getent passwd' lists the users from the LDAP server as expected, I cannot log in as any of them.

I have the following in my /etc/login.conf:

#
# ldap
#
ldap:\
        :auth=-ldap:\
        :x-ldap-server=kaitain.cory.albrecht.name,389,plain:\
        :x-ldap-basedn=ou=People,dc=cory,dc=albrecht,dc=name:\
        :x-ldap-filter=(&(objectclass=posixAccount)(uid=%u)):\
        :tc=default:

But when I try to test a user, I get the following:

root@opensecrets:/etc# /usr/libexec/auth/login_-ldap -d -s login cory ldap
Password: 
load_ssl_certs says:
        cacert none
        cacertdir none
        usercert none
        userkey none
parse_server_line buf = host
parse_server_line port == NULL, will use default
parse_server_line mode == NULL, will use default
host host, port 389, version 3
setting cert info
clearing ssl set
ldap_open(host, 389) failed
host failed, trying alternates
ldap_open failed
reject

That plus these lines from the log

Dec 17 15:30:19 <auth.warn> opensecrets.cory.albrecht.name opensecrets login_ldap: ldap_open(host, 389) failed
Dec 17 15:30:19 <auth.warn> opensecrets.cory.albrecht.name opensecrets login_ldap: ldap_open failed

make me think that somehow /etc/login.conf's ldap section is not being read. Changing the port from 389 to 38389 in login.conf makes no change to the output of the fake/test login.

I'm completely stumped as to how to determine what is causing login.conf not to be fully parsed, and I'm kind of hoping it's something easy that I will be terribly embarrassed to have forgotten.
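One classic gotcha, offered as a hedged guess rather than a confirmed diagnosis: on the BSDs, login.conf is normally read through the compiled capability database, so edits to /etc/login.conf are silently ignored until the database is rebuilt, which would match the half-read behavior above:

# rebuild the login capability database after editing /etc/login.conf
cap_mkdb /etc/login.conf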

by Bytor at December 18, 2014 03:08 AM

StackOverflow

Pattern match map on materialized observable

Say I want to convert an Observable[T] into an Observable[Try[T]]. I wanted to pattern-match over the materialized original observable, but I don't know what to return for OnCompleted():

obs.materialized map {
  case OnNext(v) => Success(v)
  case OnError(t) => Failure(t)
  case OnCompleted() => // What do I return here?
}
In general I don't understand how I can map over materialized observables when OnCompleted is a case that doesn't actually correspond to an "element" of the observable.
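Not an authoritative answer: since OnCompleted carries no element, one hedged option is collect, which simply drops the completion notification rather than forcing the match to produce a value (names assume RxScala, where the operator is materialize):

import rx.lang.scala.Observable
import rx.lang.scala.Notification.{OnNext, OnError}
import scala.util.{Try, Success, Failure}

// keep only value/error notifications; OnCompleted falls through and the
// resulting observable still completes normally
def toTries[T](obs: Observable[T]): Observable[Try[T]] =
  obs.materialize.collect {
    case OnNext(v)  => Success(v)
    case OnError(t) => Failure(t)
  }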

by Zoltán at December 18, 2014 03:04 AM

QuantOverflow

Stress Testing Methods

I'm working on the following task:

Given quarterly data:

  1. a time series representing the 1-year realized (10 years of data) rates of default on a portfolio of mortgages
  2. a slew of realized (10 years of data) macroeconomic time series. Each time series may or may not be relevant
  3. A stressed scenario of those same macroeconomic time series for 2 years

Estimate the probability of default using the stressed data.

I don't actually know anything about underlying distributions. The only data I have for inference are these time series.

My initial approach was something like this: I would first make every time series stationary. Then eliminate macroeconomic variables that were not significantly correlated with my dependent variable. Then use a stepwise method to determine the best variables to use in a linear regression. Then I would include those exogenous variables while fitting an ARIMA model. Along the way I would do several tests (e.g., autocorrelation, multicollinearity, stationarity, etc.). Then use that model for prediction.

Note that I actually have several different "portfolios" which I am fitting. Using my above procedure, some of the stressed scenarios appear unreasonable. So, I began looking for totally different alternatives. Are there any suggestions?

I realize this is an unreasonably broad question. To narrow the scope, I've done some brief research and believe some viable alternatives might include:

  • Calibrating some dynamic transition densities using Bayesian inference and MCMC
  • Calibrating a conditional Vasicek model that allows for autocorrelation

The problem is, I'm not too familiar with these methods and would want to make efficient use of my time.

Would you suggest I attempt implementing these alternatives? Or some other alternative?

Do you have any advice for implementation in R?

Thank you!

by nsw at December 18, 2014 02:52 AM

StackOverflow

Add element to JsValue?

I'm trying to add in a new element to a JsValue, but I'm not sure how to go about it.

 val rJson = Json.parse(response)
 val imgId = //stuff to get the id :Long
 rJson.apply("imgId", imgId) 
 Json.stringify(rJson)

Should I be converting to a JsObject, or is there some method that can be applied directly to the JsValue to insert a new element into the JSON?

Edit:

response is coming from another server, but I do have control over it. So, if I need to add an empty "imgId" element to the JSON Object, that's fine.
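Not part of the original question: JsValue itself has no way to add fields, but JsObject does, so a hedged sketch is to cast and append (remembering that Play JSON values are immutable, so this produces a new object):

import play.api.libs.json._

val rJson = Json.parse(response)
val withImg: JsObject = rJson.as[JsObject] + ("imgId" -> JsNumber(imgId))
Json.stringify(withImg)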

by bad at scala at December 18, 2014 02:08 AM

QuantOverflow

Overnight charges for brokers holding stocks [migrated]

I'm trying to learn about stock markets. I eventually want to invest a small amount over a long period. I notice a lot of broker sites charge an overnight fee for any stock held overnight. I presume this is charged every night thereafter?

I just wondered because it seems like a wall when you want to let your stock price grow and an overnight charge eats into it. Or is this only for CFD positions? I'm confused.

Any help?

Cheers

by Karri at December 18, 2014 02:07 AM

Planet Theory

Size sensitive packing number for Hamming cube and its consequences

Authors: Kunal Dutta, Arijit Ghosh
Download: PDF
Abstract: We prove a size-sensitive version of Haussler's packing lemma [Haussler92spherepacking] for set-systems with bounded primal shatter dimension, which have an additional size-sensitive property. This answers a question asked by Ezra [Ezra-sizesendisc-soda-14]. We also partially address another point raised by Ezra regarding overcounting of sets in her chaining procedure. As a consequence of these improvements, we get an improvement on the size-sensitive discrepancy bounds for set systems with the above property. Improved bounds on the discrepancy for these special set systems also imply an improvement in the sizes of relative $(\varepsilon, \delta)$-approximations and $(\nu, \alpha)$-samples.

December 18, 2014 01:41 AM

Shallow Packings in Geometry

Authors: Esther Ezra
Download: PDF
Abstract: We refine the bound on the packing number, originally shown by Haussler, for shallow geometric set systems. Specifically, let $\mathcal{V}$ be a finite set system defined over an $n$-point set $X$; we view $\mathcal{V}$ as a set of indicator vectors over the $n$-dimensional unit cube. A $\delta$-separated set of $\mathcal{V}$ is a subcollection $\mathcal{W}$, s.t. the Hamming distance between each pair $u, v \in \mathcal{W}$ is greater than $\delta$, where $\delta > 0$ is an integer parameter. The $\delta$-packing number is then defined as the cardinality of the largest $\delta$-separated subcollection of $\mathcal{V}$. Haussler showed an asymptotically tight bound of $\Theta((n/\delta)^d)$ on the $\delta$-packing number if $\mathcal{V}$ has VC-dimension (or primal shatter dimension) $d$. We refine this bound for the scenario where, for any subset $X' \subseteq X$ of size $m \le n$ and for any parameter $1 \le k \le m$, the number of vectors of length at most $k$ in the restriction of $\mathcal{V}$ to $X'$ is only $O(m^{d_1} k^{d-d_1})$, for a fixed integer $d > 0$ and a real parameter $1 \le d_1 \le d$ (this generalizes the standard notion of bounded primal shatter dimension when $d_1 = d$). In this case when $\mathcal{V}$ is "$k$-shallow" (all vector lengths are at most $k$), we show that its $\delta$-packing number is $O(n^{d_1} k^{d-d_1}/\delta^d)$, matching Haussler's bound for the special cases where $d_1=d$ or $k=n$. As an immediate consequence we conclude that set systems of halfspaces, balls, and parallel slabs defined over $n$ points in $d$-space admit better packing numbers when $k$ is smaller than $n$. Last but not least, we describe applications to (i) spanning trees of low total crossing number, and (ii) geometric discrepancy, based on previous work by the author.

December 18, 2014 01:41 AM

Spiral Toolpaths for High-Speed Machining of 2D Pockets with or without Islands

Authors: Mikkel Abrahamsen
Download: PDF
Abstract: We describe new methods for the construction of spiral toolpaths for high-speed machining. In the simplest case, our method takes a polygon as input and a number $\delta>0$ and returns a spiral starting at a central point in the polygon, going around towards the boundary while morphing to the shape of the polygon. The spiral consists of linear segments and circular arcs, it is $G^1$ continuous, it has no self-intersections, and the distance from each point on the spiral to each of the neighboring revolutions is at most $\delta$. Our method has the advantage over previously described methods that it is easily adjustable to the case where there is an island in the polygon to be avoided by the spiral. In that case, the spiral starts at the island and morphs the island to the outer boundary of the polygon. It is shown how to apply that method to make significantly shorter spirals in polygons with no islands. Finally, we show how to make a spiral in a polygon with multiple islands by connecting the islands into one island.

December 18, 2014 01:41 AM

Solving Totally Unimodular LPs with the Shadow Vertex Algorithm

Authors: Tobias Brunsch, Anna Großwendt, Heiko Röglin
Download: PDF
Abstract: We show that the shadow vertex simplex algorithm can be used to solve linear programs in strongly polynomial time with respect to the number $n$ of variables, the number $m$ of constraints, and $1/\delta$, where $\delta$ is a parameter that measures the flatness of the vertices of the polyhedron. This extends our recent result that the shadow vertex algorithm finds paths of polynomial length (w.r.t. $n$, $m$, and $1/\delta$) between two given vertices of a polyhedron.

Our result also complements a recent result due to Eisenbrand and Vempala who have shown that a certain version of the random edge pivot rule solves linear programs with a running time that is strongly polynomial in the number of variables $n$ and $1/\delta$, but independent of the number $m$ of constraints. Even though the running time of our algorithm depends on $m$, it is significantly faster for the important special case of totally unimodular linear programs, for which $1/\delta\le n$ and which have only $O(n^2)$ constraints.

December 18, 2014 01:41 AM

The switch Markov chain for sampling irregular graphs

Authors: Catherine Greenhill
Download: PDF
Abstract: The problem of efficiently sampling from a set of (undirected) graphs with a given degree sequence has many applications. One approach to this problem uses a simple Markov chain, which we call the switch chain, to perform the sampling. The switch chain is known to be rapidly mixing for regular degree sequences. We prove that the switch chain is rapidly mixing for any degree sequence with minimum degree at least 1 and with maximum degree $d_{\max}$ which satisfies $3\leq d_{\max}\leq \frac{1}{4}\, \sqrt{M}$, where $M$ is the sum of the degrees. The mixing time bound obtained is only an order of $n$ larger than that established in the regular case, where $n$ is the number of vertices.

December 18, 2014 01:41 AM

Optimal-Depth Sorting Networks

Authors: Daniel Bundala, Michael Codish, Luís Cruz-Filipe, Peter Schneider-Kamp, Jakub Závodný
Download: PDF
Abstract: We solve a 40-year-old open problem on the depth optimality of sorting networks. In 1973, Donald E. Knuth detailed, in Volume 3 of "The Art of Computer Programming", sorting networks of the smallest depth known at the time for n ≤ 16 inputs, quoting optimality for n ≤ 8. In 1989, Parberry proved the optimality of the networks with 9 ≤ n ≤ 10 inputs. In this article, we present a general technique for obtaining such optimality results, and use it to prove the optimality of the remaining open cases of 11 ≤ n ≤ 16 inputs. We show how to exploit symmetry to construct a small set of two-layer networks on n inputs such that if there is a sorting network on n inputs of a given depth, then there is one whose first layers are in this set. For each network in the resulting set, we construct a propositional formula whose satisfiability is necessary for the existence of a sorting network of a given depth. Using an off-the-shelf SAT solver we show that the sorting networks listed by Knuth are optimal. For n ≤ 10 inputs, our algorithm is orders of magnitude faster than the prior ones.

December 18, 2014 01:40 AM

CompsciOverflow

Why the analysis of Aloha protocol uses Poisson distribution?

In pretty much all of the analyses of the Aloha protocol that I have read, it is assumed that the distribution of packet arrivals is Poisson. What is the rationale behind this? Isn't it actually a binomial distribution that is approximated by a Poisson because $n$ (the number of users) is large and $p$ (the transmission probability) is close to zero?
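
That is essentially the rationale, at least as it is usually presented: with $n$ users each transmitting independently with probability $p$ in a slot, the number of arrivals $K$ is binomial, and holding $np = \lambda$ fixed while letting $n \to \infty$ gives the Poisson limit:

$\Pr[K = k] = \binom{n}{k} p^k (1-p)^{n-k} \;\longrightarrow\; \frac{\lambda^k e^{-\lambda}}{k!}$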

by Mohsen at December 18, 2014 01:40 AM

/r/netsec

arXiv Networking and Internet Architecture

From Co- Toward Multi-Simulation of Smart Grids based on HLA and FMI Standards. (arXiv:1412.5571v1 [cs.NI])

In this article, a multi-simulation model is proposed to measure the performance of all Smart Grid perspectives as defined in the IEEE P2030 standard. As a preliminary implementation, a novel information technology (IT) and communication multi-simulator is developed following a High Level Architecture (HLA). To illustrate the usefulness of such a multi-simulator, a case study of a distribution network operation application is presented using real-world topology configurations with realistic communication traffic based on IEC 61850. The multi-simulator allows one to quantify, in terms of communication delay and system reliability, the impacts of aggregating all traffic on a low-capacity wireless link based on Digital Mobile Radio (DMR) when a Long Term Evolution (LTE) network failure occurs. The case study illustrates that such a multi-simulator can be used to experiment with new smart grid mechanisms and verify their impacts on all smart grid perspectives in an automated manner. Even more importantly, multi-simulation can prevent problems before modifying/upgrading a smart grid and thus potentially reduce utility costs.

by Martin Lévesque, Christophe Béchet, Eric Suignard, Martin Maier, Anne Picault, Géza Joós at December 18, 2014 01:30 AM

Backtest of Trading Systems on Candle Charts. (arXiv:1412.5558v1 [q-fin.TR])

In this paper we try to design the necessary calculation needed for backtesting trading systems when only candle chart data are available. We lay particular emphasis on situations which are not or not uniquely decidable and give possible strategies to handle such situations.

by Stanislaus Maier-Paape, Andreas Platen at December 18, 2014 01:30 AM

Standing Together for Reproducibility in Large-Scale Computing: Report on reproducibility@XSEDE. (arXiv:1412.5557v1 [cs.DC])

This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enable, and support reproducible research; and (2) individual researchers should conduct each experiment as though someone will replicate that experiment. Participants documented numerous issues, questions, technologies, practices, and potentially promising initiatives emerging from the discussion, but also highlighted four areas of particular interest to XSEDE: (1) documentation and training that promotes reproducible research; (2) system-level tools that provide build- and run-time information at the level of the individual job; (3) the need to model best practices in research collaborations involving XSEDE staff; and (4) continued work on gateways and related technologies. In addition, an intriguing question emerged from the day's interactions: would there be value in establishing an annual award for excellence in reproducible research?

by Doug James, Nancy Wilkins-Diehr, Victoria Stodden, Dirk Colbry, Carlos Rosales, Mark Fahey, Justin Shi, Rafael F. Silva, Kyo Lee, Ralph Roskies, Laurence Loewe, Susan Lindsey, Rob Kooper, Lorena Barba, David Bailey, Jonathan Borwein, Oscar Corcho, Ewa Deelman, Michael Dietze, Benjamin Gilbert, Jan Harkes, Seth Keele, Praveen Kumar, Jong Lee, Erika Linke, Richard Marciano, Luigi Marini, Chris Mattman, Dave Mattson, Kenton McHenry, Robert McLay, Sheila Miguez, Barbara Minsker, Maria Perez-Hernandez, Dan Ryan, Mats Rynge, Idafen Santana-Perez, Mahadev Satyanarayanan, Gloriana St. Clair, Keith Webster, Elvind Hovig, Dan Katz, Sophie Kay, Geir Sandve, David Skinner, Gabrielle Allen, John Cazes, Kym Won Cho, Jim Fonseca, Lorraine Hwang, Lars Koesterke, Pragnesh Patel, Line Pouchard, Ed Seidel, et al. (1 additional author not shown) at December 18, 2014 01:30 AM

Kickstarting High-performance Energy-efficient Manycore Architectures with Epiphany. (arXiv:1412.5538v1 [cs.AR])

In this paper we introduce Epiphany as a high-performance energy-efficient manycore architecture suitable for real-time embedded systems. This scalable architecture supports floating point operations in hardware and achieves 50 GFLOPS/W in 28 nm technology, making it suitable for high performance streaming applications like radio base stations and radar signal processing. Through an efficient 2D mesh Network-on-Chip and a distributed shared memory model, the architecture is scalable to thousands of cores on a single chip. An Epiphany-based open source computer named Parallella was launched in 2012 through Kickstarter crowd funding and has now shipped to thousands of customers around the world.

by Andreas Olofsson, Tomas Nordström, Zain Ul-Abdin at December 18, 2014 01:30 AM

The Shapley group value. (arXiv:1412.5429v1 [math.OC])

Following the original interpretation of the Shapley value (Shapley, 1953a) as an a priori evaluation of the prospects of a player in a multi-person interaction situation, we propose a group value, which we call the Shapley group value, as an a priori evaluation of the prospects of a group of players in a coalitional game when acting as a unit. We study its properties and we give an axiomatic characterization. Relying on this valuation we analyze the profitability of a group. We motivate our proposal by means of some relevant applications of the Shapley group value, when it is used as an objective function by a decision-maker who is trying to identify an optimal group of agents in a framework in which agents interact and the attained benefit can be modeled by means of a transferable utility game. As an illustrative example we analyze the problem of identifying the set of key agents in a terrorist network.

by Ramón Flores, Elisenda Molina, Juan Tejada at December 18, 2014 01:30 AM

Buffer Overflow Analysis for C. (arXiv:1412.5400v2 [cs.PL] UPDATED)

Buffer overflow detection and mitigation for C programs has been an important concern for a long time. This paper defines a string buffer overflow analysis for C programs. The key ideas of our formulation are (a) separating buffers from the pointers that point to them, (b) modelling buffers in terms of sizes and sets of positions of null characters, and (c) defining stateless functions to compute the sets of null positions and mappings between buffers and pointers.

This exercise has been carried out to test the feasibility of describing such an analysis in terms of lattice valued functions and relations to facilitate automatic construction of an analyser without the user having to write C/C++/Java code. This is facilitated by devising stateless formulations because stateful formulations combine features through side effects in states raising a natural requirement of C/C++/Java code to be written to describe them. Given the above motivation, the focus of this paper is not on showing good static approximations for buffer overflow analysis but to show how given static approximations could be formalized in terms of stateless formulations so that they become amenable to automatic construction of analysers.

by Uday P. Khedker at December 18, 2014 01:30 AM

Representation of Evolutionary Algorithms in FPGA Cluster for Project of Large-Scale Networks. (arXiv:1412.5384v1 [cs.DC])

Many problems are related to network projects, such as electric distribution, telecommunication and others. Most of them can be represented by graphs, which manipulate thousands or millions of nodes, becoming almost an impossible task to obtain real-time solutions. Many efficient solutions use Evolutionary Algorithms (EA), where researches show that performance of EAs can be substantially raised by using an appropriate representation, such as the Node-Depth Encoding (NDE). The objective of this work was to partition an implementation on single-FPGA (Field-Programmable Gate Array) based on NDE from 512 nodes to a multi-FPGAs approach, expanding the system to 4096 nodes.

by Andre B. Perina, Marcilyanne M. Gois, Paulo Matias, Joao M. P. Cardoso, Alexandre C. B. Delbem, Vanderlei Bonato at December 18, 2014 01:30 AM

Maximal Correlation Secrecy. (arXiv:1412.5374v1 [cs.IT])

This paper shows that the Hirschfeld-Gebelein-R\'enyi maximal correlation between the message and the ciphertext provides good secrecy guarantees for ciphers that use short keys. We show that a maximal correlation $0< \rho < 1$ can be achieved via a randomly generated cipher with key length of around $2 \log(1/\rho)$ for small $\rho$, independent of the message length. It can also be achieved by a stream cipher with key length of $2\log(1/\rho) + \log n+2$ for a message of length $n$. We provide a converse result showing that the maximal correlations of these randomly generated ciphers are close to optimal. We then show that any cipher with a small maximal correlation achieves a variant of semantic security with computationally unbounded adversary. These results clearly demonstrate that maximal correlation is a stronger and more practically relevant measure of secrecy than mutual information.

by Cheuk Ting Li, Abbas El Gamal at December 18, 2014 01:30 AM

Spectrum and Energy Efficiency Evaluation of Two-Tier Femtocell networks With Partially Open Channels. (arXiv:1412.5372v1 [cs.NI])

Two-tier femtocell networks is an efficient communication architecture that significantly improves throughput in indoor environments with low power consumption. Traditionally, a femtocell network is usually configured to be either completely open or completely closed in that its channels are either made available to all users or used by its own users only. This may limit network flexibility and performance. It is desirable for owners of femtocell base stations if a femtocell can partially open its channels for external users access. In such scenarios, spectrum and energy efficiency becomes a critical issue in the design of femtocell network protocols and structure. In this paper, we conduct performance analysis for two-tier femtocell networks with partially open channels. In particular, we build a Markov chain to model the channel access in the femtocell network and then derive the performance metrics in terms of the blocking probabilities. Based on stationary state probabilities derived by Markov chain models, spectrum and energy efficiency are modeled and analyzed under different scenarios characterized by critical parameters, including number of femtocells in a macrocell, average number of users, and number of open channels in a femtocell. Numerical and Monte-Carlo (MC) simulation results indicate that the number of open channels in a femtocell has an adverse impact on the spectrum and energy efficiency of two-tier femtocell networks. Results in this paper provide guidelines for trading off spectrum and energy efficiency of two-tier femtocell networks by configuring different numbers of open channels in a femtocell.

by Xiaohu Ge, Tao Han, Yan Zhang, Guoqiang Mao, Cheng-Xiang Wang, Jing Zhang, Bin Yang, Sheng Pan at December 18, 2014 01:30 AM

Capacity analysis of a multi-cell multi-antenna cooperative cellular network with co-channel interference. (arXiv:1412.5366v1 [cs.NI])

Characterization and modeling of co-channel interference is critical for the design and performance evaluation of realistic multi-cell cellular networks. In this paper, based on alpha stable processes, an analytical co-channel interference model is proposed for multi-cell multiple-input multi-output (MIMO) cellular networks. The impact of different channel parameters on the new interference model is analyzed numerically. Furthermore, the exact normalized downlink average capacity is derived for a multi-cell MIMO cellular network with co-channel interference. Moreover, the closed-form normalized downlink average capacity is derived for cell-edge users in the multi-cell multiple-input single-output (MISO) cooperative cellular network with co-channel interference. From the new co-channel interference model and capacity, the impact of cooperative antennas and base stations on cell-edge user performance in the multi-cell multi-antenna cellular network is investigated by numerical methods. Numerical results show that cooperative transmission can improve the capacity performance of multi-cell multi-antenna cooperative cellular networks, especially in a scenario with a high density of interfering base stations. The capacity performance gain is degraded with the increased number of cooperative antennas or base stations.

by Xiaohu Ge, Kun Huang, Cheng-Xiang Wang, Xuemin Hong, Xi Yang at December 18, 2014 01:30 AM

Energy Efficiency Evaluation of Cellular Networks Based on Spatial Distributions of Traffic Load and Power Consumption. (arXiv:1412.5356v1 [cs.NI])

Energy efficiency has gained its significance when service providers' operational costs burden with the rapidly growing data traffic demand in cellular networks. In this paper, we propose an energy efficiency model for Poisson-Voronoi tessellation (PVT) cellular networks considering spatial distributions of traffic load and power consumption. The spatial distributions of traffic load and power consumption are derived for a typical PVT cell, and can be directly extended to the whole PVT cellular network based on the Palm theory. Furthermore, the energy efficiency of PVT cellular networks is evaluated by taking into account traffic load characteristics, wireless channel effects and interference. Both numerical and Monte Carlo simulations are conducted to evaluate the performance of the energy efficiency model in PVT cellular networks. These simulation results demonstrate that there exist maximal limits for energy efficiency in PVT cellular networks for given wireless channel conditions and user intensity in a cell.

by Lin Xiang, Xiaohu Ge, Cheng-Xiang Wang, Frank Y. Li, Frank Reichert at December 18, 2014 01:30 AM

Analysis of Two-Tier LTE Network with Randomized Resource Allocation and Proactive Offloading. (arXiv:1412.5340v1 [cs.IT])

The heterogeneity in cellular networks that comprise multiple base stations types imposes new challenges in network planning and deployment. The Radio Resource Management (RRM) techniques, such as dynamic sharing of the available resources and advanced user association strategies determine the overall network capacity and the network/spectrum efficiency. This paper evaluates the downlink performance of a two-tier heterogeneous LTE network (consisting of macro and femto tiers) in terms of rate distribution, i.e. the percentage of users that achieve certain rate in the system. The paper specifically addresses: (1) the femto tier RRM by randomization of the allocated resources; (2) the user association process by introducing novel proactive offloading scheme and (3) femto tier access control. System level simulation results show that an optimal RRM strategy can be designed for different scenarios (e.g. congested and uncongested networks). The proposed proactive offloading scheme in the association phase improves the performance of congested networks by efficiently utilizing the available femto tier resources. Finally, the introduced hybrid access in femto tier is shown to perform nearly identical as the open access.

by Katerina Smiljkovikj, Aleksandar Ichkov, Marko Angjelicinoski, Vladimir Atanasovski, Liljana Gavrilovska at December 18, 2014 01:30 AM

Efficient XVA Management: Computation, Hedging, and Attribution using Trade-Level Regression and Global Conditioning. (arXiv:1412.5332v1 [q-fin.CP])

Banks must manage the lifetime costs of XVA, i.e. credit (CVA), funding (including funding of initial margins, FVA and MVA), capital (KVA) and tax (TVA). Management includes hedging (requiring first- and second-order sensitivities), attribution (and re-attribution), and incremental changes together with their interactions. Incremental management is required throughout the trading day for multiple portfolio changes. We show that a combination of trade-level regression and global conditioning (exploiting the linearity of conditional expectations over filtrations) radically simplifies both the computation and also the implementation of the computations. Moreover, many calculation elements are inherently parallel and suitable for GPU implementation.

by Chris Kenyon, Andrew Green at December 18, 2014 01:30 AM

Reduction and Fixed Points of Boolean Networks and Linear Network Coding Solvability. (arXiv:1412.5310v1 [cs.IT])

Linear network coding transmits data through networks by letting the intermediate nodes combine the messages they receive and forward the combinations towards their destinations. The solvability problem asks whether the demands of all the destinations can be simultaneously satisfied by using linear network coding. The guessing number approach converts this problem to determining the number of fixed points of coding functions $f:A^n\to A^n$ over a finite alphabet $A$ (usually referred to as Boolean networks if $A = \{0,1\}$) with a given interaction graph, that describes which local functions depend on which variables. In this paper, we generalise the so-called reduction of coding functions in order to eliminate variables. We then determine the maximum number of fixed points of a fully reduced coding function, whose interaction graph has a loop on every vertex. Since the reduction preserves the number of fixed points, we then apply these ideas and results to obtain four main results on the linear network coding solvability problem. First, we prove that non-decreasing coding functions cannot solve any more instances than routing already does. Second, we show that triangle-free undirected graphs are linearly solvable if and only if they are solvable by routing. This is the first classification result for the linear network coding solvability problem. Third, we exhibit a new class of non-linearly solvable graphs. Fourth, we determine large classes of strictly linearly solvable graphs.

by Maximilien Gadouleau, Adrien Richard, Eric Fanchon at December 18, 2014 01:30 AM

Graph Analytics using the Vertica Relational Database. (arXiv:1412.5263v1 [cs.DB])

Graph analytics is becoming increasingly popular, with a deluge of new systems for graph analytics having been proposed in the past few years. These systems often start from the assumption that a new storage or query processing system is needed, in spite of graph data being often collected and stored in a relational database in the first place. In this paper, we study the Vertica relational database as a platform for graph analytics. We show that vertex-centric graph analysis can be translated to SQL queries, typically involving table scans and joins, and that modern column-oriented databases are very well suited to running such queries. Specifically, we present an experimental evaluation of the Vertica relational database system on a variety of graph analytics, including iterative analysis, a combination of graph and relational analyses, and more complex 1-hop neighborhood graph analytics, showing that it is competitive with two popular vertex-centric graph analytics systems, namely Giraph and GraphLab.

by Alekh Jindal, Samuel Madden, Malu Castellanos, Meichun Hsu at December 18, 2014 01:30 AM

How many queries are needed to distinguish a truncated random permutation from a random function?. (arXiv:1412.5204v1 [cs.CR])

An oracle chooses a function $f$ from the set of $n$-bit strings to itself, which is either a randomly chosen permutation or a randomly chosen function. When queried with an $n$-bit string $w$, the oracle computes $f(w)$, truncates the last $m$ bits, and returns only the first $n-m$ bits of $f(w)$. How many queries does a querying adversary need to submit in order to distinguish the truncated permutation from the (truncated) function?

In 1998, Hall et al. showed an algorithm for determining (with high probability) whether or not $f$ is a permutation, using $O(2^{\frac{m+n}{2}})$ queries. They also showed that if $m < n/7$, a smaller number of queries will not suffice. For $m > n/7$, their method gives a weaker bound. In this note, we first show how a modification of the approximation method used by Hall et al. can solve the problem completely. It extends the result to practically any $m$, showing that $\Omega(2^{\frac{m+n}{2}})$ queries are needed to get a non-negligible distinguishing advantage. However, more surprisingly, a better bound for the distinguishing advantage can be obtained from a result of Stam published, in a different context, already in 1978. We also show that, at least in some cases, Stam's bound is tight.

by Shoni Gilboa, Shay Gueron, Ben Morris at December 18, 2014 01:30 AM

Embedding in $q$-ary $1$-perfect codes and partitions. (arXiv:1412.3795v2 [math.CO] CROSS LISTED)

We prove that every $1$-error-correcting code over a finite field can be embedded in a $1$-perfect code of some larger length. Embedding in this context means that the original code is a subcode of the resulting $1$-perfect code and can be obtained from it by repeated shortening. Further, we generalize the results to partitions: every partition of the Hamming space into $1$-error-correcting codes can be embedded in a partition of a space of some larger dimension into $1$-perfect codes. For the partitions, the embedding length is close to the theoretical bound for the general case and optimal for the binary case. Keywords: error-correcting code, $1$-perfect code, $1$-perfect partition, embedding

by Denis S. Krotov, Evgeniya V. Sotnikova at December 18, 2014 01:30 AM

The complexity of interior point methods for solving discounted turn-based stochastic games. (arXiv:1304.1888v2 [cs.GT] UPDATED)

We study the problem of solving discounted, two player, turn based, stochastic games (2TBSGs). Jurdzinski and Savani showed that 2TBSGs with deterministic transitions can be reduced to solving $P$-matrix linear complementarity problems (LCPs). We show that the same reduction works for general 2TBSGs. This implies that a number of interior point methods for solving $P$-matrix LCPs can be used to solve 2TBSGs. We consider two such algorithms. First, we consider the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, which runs in time $O((1+\kappa)n^{3.5}L)$, where $\kappa$ is a parameter that depends on the $n \times n$ matrix $M$ defining the LCP, and $L$ is the number of bits in the representation of $M$. Second, we consider the interior point potential reduction algorithm of Kojima, Megiddo, and Ye, which runs in time $O(\frac{-\delta}{\theta}n^4\log \epsilon^{-1})$, where $\delta$ and $\theta$ are parameters that depend on $M$, and $\epsilon$ describes the quality of the solution. For 2TBSGs with $n$ states and discount factor $\gamma$ we prove that in the worst case $\kappa = \Theta(n/(1-\gamma)^2)$, $-\delta = \Theta(\sqrt{n}/(1-\gamma))$, and $1/\theta = \Theta(n/(1-\gamma)^2)$. The lower bounds for $\kappa$, $-\delta$, and $1/\theta$ are obtained using the same family of deterministic games.

by Thomas Dueholm Hansen, Rasmus Ibsen-Jensen at December 18, 2014 01:30 AM

/r/compsci

What is the difference between a scalar value and a constant expression?

Hi! I have a final tomorrow and I am a little confused so I was hoping you guys could explain something to me.

We were learning about implementing parameter-passing methods, and the professor wrote down that if you pass the parameter by name you get these resulting semantics (see the sketch after the post):

  1. If the actual parameter is a scalar value --> pass by reference
  2. If the actual parameter is a constant expression --> pass by value
  3. If the actual parameter is an array index (element) --> bound by text

I have looked around google a lot but I am still having a hard time understanding what a scalar value is and what a constant expression is. If anyone could explain what those are in this context I would be ever so grateful.

Thanks so much.
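
A small illustration of the distinction (in Scala rather than the course's language, since Scala has by-name parameters built in): a scalar variable has a location that can be written back to, while a constant expression only has a value.

object PassByName {
  var counter = 0                       // a scalar variable: has an address

  // `expr` is passed by name: it is re-evaluated at every use.
  def twice(expr: => Int): Int = expr + expr

  def main(args: Array[String]): Unit = {
    println(twice { counter += 1; counter }) // re-evaluated: 1 + 2 = 3
    println(twice(2 + 3))                    // constant expression: 5 + 5 = 10
  }
}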

submitted by CluelessAgain
[link] [25 comments]

December 18, 2014 01:27 AM

CompsciOverflow

Knapsack problem is NP-complete - Exact cover

Show that the knapsack problem (Given a sequence of integers $S=i_1, i_2, \dots , i_n$ and an integer $k$, is there a subsequence of $S$ that sums to exactly $k$?) is NP-complete.

Hint:Use the exact cover problem.

The exact cover problem is the following: Given a family of sets $S_1, S_2, \dots , S_n$ does there exist a set cover consisting of a subfamily of pairwise disjoint sets?

First of all, to show that this problem is in $\mathcal{NP}$, do we have to do the following?

A nondeterministic Turing machine can first guess the subsequence that we are looking for and then verify that it sums to exactly $k$ in linear time.

Is this correct?

To show that it is NP-complete, how could we reduce the exact cover problem to the knapsack problem?

Could you give me some hints?

EDIT:

Is it as follows?

The exact cover problem has a solution iff there is a subfamily in which every element is in exactly one set.

We consider a set $S$ of numbers and a target $k$ such that each number corresponds to one of the sets and $k$ corresponds to the whole ground set. Suppose there are $n$ elements and $m$ different sets.

We replace each set $S_j$ with a number that has a $1$ in its $i$th position if $i \in S_j$ and a $0$ in its $i$th position otherwise.

We set $k$ to the number consisting of $n$ copies of the digit $1$.
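
One caveat worth keeping in mind (a standard refinement, not something the hint requires): with binary digits, the sums of several such numbers can carry between positions. Writing the numbers in base $m+1$ rules carries out, since each digit position accumulates at most $m$ ones from the chosen sets:

$x_j = \sum_{i \in S_j} (m+1)^{i-1}, \qquad k = \sum_{i=1}^{n} (m+1)^{i-1}$

A subfamily of sets then sums to exactly $k$ iff every element is covered exactly once, which is precisely an exact cover.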

by Mary Star at December 18, 2014 01:24 AM

QuantOverflow

Median value for geometric brownian motion simulation

I'm trying to simulate stock prices using GBM. I am using the following formula, and MATLAB function, to determine the stock prices:

$\nu = \mu - \frac{\sigma^{2}}{2}$

S = S0*[ones(1,nsims); ...
    cumprod(exp(nu*dt + sigma*sqrt(dt)*randn(steps,nsims)), 1)];

using the following parameters:

$S_0 = 1$, $\mu = 0$, $\sigma = 0.2481$, $dt = 1/365$, $\text{steps} = 365$, $\text{nsims} = 1000$.

When I use this to generate the stock prices the results look log-normal and the log of the returns from the first to last price is also normal.

The issue I am having is that with no drift the median should be 1 according to http://en.wikipedia.org/wiki/Log-normal_distribution#Mode_and_median, but I am consistently getting values less than 1.

I am not sure what is going on or if I am simulating incorrectly.

This is my first time posting so please let me know if I have done anything incorrectly.

Thank you.
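
A quick check with standard lognormal algebra (not from the post): the simulated process is $S_t = S_0 \exp(\nu t + \sigma W_t)$, and since $W_t$ has median $0$,

$\operatorname{median}(S_t) = S_0 e^{\nu t} = e^{-\sigma^2 t/2} \approx e^{-0.2481^2/2} \approx 0.97 \qquad (t = 1,\ \mu = 0,\ S_0 = 1)$

so medians slightly below $1$ are what $\mu = 0$ predicts: the Wikipedia statement applies when the log-mean $\nu t$ is zero, and here $\nu = -\sigma^2/2 < 0$.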

by Robert at December 18, 2014 01:23 AM

infra-talk

Fixing NFS on SmartOS after rebooting

For some undetermined reason, NFS fails when my SmartOS NAS is rebooted. statd fails to start and nlockmgr also fails.

These commands seem to fix it:

# restart the RPC binder that the NFS services register with
svcadm restart svc:/network/rpc/bind:default
# clear statd's maintenance state so SMF starts it again
svcadm clear svc:/network/nfs/status:default
# raise the lock daemon's server count, then clear its maintenance state
sharectl set -p lockd_servers=80 nfs
svcadm clear svc:/network/nfs/nlockmgr:default

by Robin Bowes at December 18, 2014 01:21 AM

/r/emacs

Automate reverting to last known good config on startup error?

I use emacs for everything nowadays, so it's super annoying when I start it up and discover that 99% of my settings didn't load because of an error on the third line of my .emacs, and now I have to go find and debug it using the default settings.

So what I'd like to do is have emacs automagically revert to a "last known good" configuration if there is an error at startup. I already track my config in git and have some ideas about how to implement this but I don't want to reinvent the wheel. Has anybody else tackled this?

submitted by tending
[link] [4 comments]

December 18, 2014 01:13 AM

StackOverflow

Collect logs from clients running different flavours of system

I am trying to collect logs securely. There are machines in the public domain from which I need to get logs (Windows/Linux flavours), and I intend to make use of a framework like Flume/Fluentd.

Any idea if this is doable in a secure way?

Thanks in advance.

by user3044440 at December 18, 2014 01:00 AM

Optimizing span creation in SpannableString

I'm trying to create a styled EditText widget. Pressing ctrl-b, ctrl-i or ctrl-u toggles bold, italic or underlining, and any entered text is styled accordingly.

I'm using an InputFilter to set the correct style on entered text. See the Scala code below. It works in that each character I input has the correct style, but fails in that it creates an entirely new span for each character, even if the styles are identical. Later in my code I'm converting the SpannableString into a styled Word document, and the fact that each character is uniquely styled creates highly inefficient documents.

The most ideal situation would be for entering bolded text to determine whether it is within, before or after another bolded region, or if no region exists. If within, no action is taken. If before or after, the span is deleted and recreated with an area large enough to account for the new character. If no region exists, a new one is created.

Unfortunately, an InputFilter is not allowed to modify its source. My next idea was to break document modification into two passes, one which inserts the new text/span as I'm doing now, and a TextWatcher that iterates over spans and identifies adjacent/identical ones. But this seems messy and highly suboptimal, particularly since it looks like the only method that can modify text on the TextWatcher is called without any kind of offset information.

I feel like I'm missing something. Is there no SpanOptimizer class or method I can run over a SpannableString that identifies adjacent/overlapping identical spans and combines them into the smallest group? Is there some smart way to create spans that eliminates this problem? I thought about perhaps tweaking the span creation flags, but I want to dynamically grow spans when new text is added, not make one expand infinitely.

Here's my Scala InputFilter code:

  def filter(source: CharSequence, start: Int, end: Int, dest: Spanned, dstart: Int, dend: Int) = {
    val rv = new SpannableString(source)
    def doFormat(check: () => Boolean, typeface: Option[Int] = None, span: Option[_] = None) {
      if (check()) {
        //Log.d("editorcheck", "Filter: source = "+source+", start = "+start+", end = "+end+", dest = "+dest+", dstart = "+dstart+", dend = "+dend)
        if (source.length != 0) {
          typeface.foreach { tf =>
            rv.setSpan(new style.StyleSpan(tf), start, end, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
          }
          span.foreach { s =>
            rv.setSpan(s, start, end, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
          }
        }
      } else {
        if (source.length != 0) {
          /*Nil.find(_.getStyle == Typeface.BOLD).foreach { span =>
            null
          }*/
        }
      }
    }
    doFormat(() => bold && italic, typeface = Some(Typeface.BOLD_ITALIC))
    doFormat(() => bold && !italic, typeface = Some(Typeface.BOLD))
    doFormat(() => italic && !bold, typeface = Some(Typeface.ITALIC))
    doFormat(() => underline, span = Some(new style.UnderlineSpan()))
    rv
  }
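
For the "SpanOptimizer" question above: nothing like that seems to ship with Android, but a second pass along these lines could do it. The sketch below is a hypothetical helper, not an Android API; it handles StyleSpans only (UnderlineSpans would need the same treatment), removing all spans of a given style and re-creating one span per maximal run.

import android.text.{Spannable, Spanned}
import android.text.style.StyleSpan

// Collapses adjacent or overlapping StyleSpans sharing the same style.
def mergeStyleSpans(text: Spannable): Unit =
  text.getSpans(0, text.length, classOf[StyleSpan])
    .groupBy(_.getStyle)
    .foreach { case (styleFlag, spans) =>
      val ranges = spans
        .map(s => (text.getSpanStart(s), text.getSpanEnd(s)))
        .sortBy(_._1)
      spans.foreach(text.removeSpan)
      // Fold touching or overlapping ranges into maximal runs.
      val merged = ranges.foldLeft(List.empty[(Int, Int)]) {
        case ((s, e) :: rest, (ns, ne)) if ns <= e => (s, math.max(e, ne)) :: rest
        case (acc, r)                              => r :: acc
      }
      merged.foreach { case (s, e) =>
        text.setSpan(new StyleSpan(styleFlag), s, e, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE)
      }
    }

Running this from a TextWatcher's afterTextChanged would keep the span count proportional to the number of styled runs rather than the number of keystrokes.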

Thanks.

by Nolan at December 18, 2014 12:52 AM

AWS

Resource Groups and Tagging for AWS

For many years, AWS customers have used tags to organize their EC2 resources (instances, images, load balancers, security groups, and so forth), RDS resources (DB instances, option groups, and more), VPC resources (gateways, option sets, network ACLs, subnets, and the like), Route 53 health checks, and S3 buckets. Tags are used to label, collect, and organize resources and become increasingly important as you use AWS in larger and more sophisticated ways. For example, you can tag relevant resources and then take advantage of AWS Cost Allocation for Customer Bills.

Today we are making tags even more useful with the introduction of a pair of new features: Resource Groups and a Tag Editor. Resource Groups allow you to easily create, maintain, and view a collection of resources that share common tags. The new Tag Editor allows you to easily manage tags across services and Regions. You can search globally and edit tags in bulk, all with a couple of clicks.

Let's take a closer look at both of these cool new features! Both of them can be accessed from the new AWS menu:

Tag Editor
Until today, when you decided to start making use of tags, you were faced with the task of stepping through your AWS resources on a service-by-service, region-by-region basis and applying tags as needed. The new Tag Editor centralizes and streamlines this process.

Let's say I want to find and then tag all of my EC2 resources. The first step is to open up the Tag Editor and search for them:

The Tag Editor searches my account for the desired resource types across all of the selected Regions and then displays all of the matches:

I can then select all or some of the resources for editing. When I click on the Edit tags for selected button, I can see and edit existing tags and add new ones. I can also see existing System tags:

I can see which values are in use for a particular tag by simply hovering over the Multiple values indicator:

I can change multiple tags simultaneously (changes take effect when I click on Apply changes):

Resource Groups
A Resource Group is a collection of resources that shares one or more tags. It can span Regions and services and can be used to create what is, in effect, a custom console that organizes and consolidates the information you need on a per-project basis.

You can create a new Resource Group with a couple of clicks. I tagged a bunch of my AWS resources with Service and then added the EC2 instances, DB instances, and S3 buckets to a new Resource Group:

My Resource Groups are available from within the AWS menu:

Selecting a group displays information about the resources in the group, including any alarm conditions (as appropriate):

This information can be further expanded:

Each identity within an AWS account can have its own set of Resource Groups. They can be shared between identities by clicking on the Share icon:

Down the Road
We are, as usual, very interested in your feedback on this feature and would love to hear from you! To get in touch, simply open up the Resource Groups Console and click on the Feedback button.

Available Now
Resource Groups and the Tag Editor are available now and you can start using them today!

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at December 18, 2014 12:31 AM

TheoryOverflow

Time complexity of a branching-and-bound algorithm

Theoretical computer scientists usually use branch-and-reduce algorithms to find exact solutions. The time complexity of such a branching algorithm is usually analyzed by the method of branching vectors, and recently developed techniques such as measure-and-conquer may help us to obtain a better bound. While branch-and-bound algorithms are usually used in practice and seem more efficient (in my experience), I have found no results analyzing the worst-case time complexity of a branch-and-bound algorithm. Does anyone know of such an example?

by Bangye at December 18, 2014 12:19 AM

StackOverflow

How do you cope with emacs halting on receiving big input?

I am developing a project in Clojure using Emacs CIDER under Windows. Sometimes, after an accidentally forgotten println call or after printing the contents of a big file, Emacs stops responding (the cursor and all key combinations stop working) while it processes that output for display in the REPL. The only way to continue that I know of is to close the program and open the project files from scratch. And it is very easy to fall into this trap.

Are there any better solutions or configuration restrictions?

by user2244092 at December 18, 2014 12:15 AM

How to save Scala List object in session for Scala Play framework

I am new to Scala and the Play framework and am working on my first web application using it.

I am looking for how to save a Scala List object into the session. I see request.session has a method for adding a key-value pair, but both the key and the value must be Strings. My requirement is to add a List object to the session so that I can access it anywhere in the application.

Please help out with sample code here.
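
A sketch of the usual workaround (assumes Play 2.x and a List[String]; adapt the JSON Reads/Writes for your own type): the session cookie only stores String -> String pairs, so serialize the list, e.g. as JSON.

import play.api.libs.json.Json
import play.api.mvc._

def save = Action { implicit request =>
  val items = List("a", "b", "c")  // hypothetical list to persist
  // Serialize to a JSON string before putting it in the session.
  Ok("saved").withSession(
    request.session + ("items" -> Json.stringify(Json.toJson(items))))
}

def read = Action { implicit request =>
  val items = request.session.get("items")
    .map(Json.parse(_).as[List[String]])
    .getOrElse(Nil)
  Ok(items.mkString(", "))
}

Bear in mind that the Play session lives in a signed cookie of roughly 4 KB, so anything large is better kept in a cache or database with only a key stored in the session.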

by Kiran Nunna at December 18, 2014 12:09 AM

DataTau

Planet Clojure

Couple of DataScript resources

There’s couple of new resources available about DataScript.

On December 4th I gave a talk at the Clojure eXchange conference about the motivation behind DataScript, a little bit about its internals, and then about how DataScript can be used for application development. Beyond traditional SPAs, there were a couple of examples of new kinds of architectures that are trivial to execute given that DataScript exists.

You can watch video of the talk at SkillsMatter website (free, registration required) and check out slides:

Later this month I talked at the ClojureScript NYC user group. During the webinar we developed a ToDo application from scratch and touched, at least quickly, on almost every aspect of DataScript. Here's the agenda:

  • Create DB schema (multi-valued relations, references)
  • Add ability to create tasks (basic transact!)
  • Display list of tasks (basic query)
  • Display tags on tasks (multi-valued attrs)
  • Persist database to localStorage (serialization/deserialization)
  • Make tasks completable (transact functions)
  • Assign projects to tasks (entity navigation)
  • Display task count for projects (aggregate queries)
  • Display task count for inbox (“negate” query, query functions, query predicates)
  • Display “by month” grouping (custom fn call in a query)
  • Make left panel navigable (storing “view” app state in a db)
  • Add filter (implicit OR via rules and collection bindings)

The recording:

After the webinar I fixed a couple of bugs in the ToDo repo (and in DataScript as well), added comments here and there explaining what's going on, and implemented a couple of new features:

  • DB filtering
  • Serialization via transit-cljs
  • History tracking and undo/redo support

DataScript-ToDo should be a good resource for learning DataScript and its applications in the wild. Source code is on github, live version here:

Stay tuned!

by Nikita Prokopov at December 18, 2014 12:00 AM

Planet Clojure

Creating an interpose transducer

Continuing from yesterday’s post, I wanted to cover the interpose transducer from CLJ-1601.

The sequence version of interpose is a straightforward combination of other existing sequence functions (repeat, interleave, and drop). It’s implemented like this:

(defn interpose [sep coll]
  (drop 1 (interleave (repeat sep) coll)))

Walking inside out, (repeat sep) will create an infinite sequence of separators. (interleave (repeat sep) coll) will interleave the infinite separator seq and the collection (possibly also an infinite sequence!) like this:

sep elem0 sep elem1 sep elem2

And finally the (drop 1) loses the first separator which is unnecessary.

In the transducer version, I chose to use a volatile to store a flag about whether this was the first input element. In the case of the first input element, we simply update the flag and invoke the inner reducing function on the first element. This effectively does the “drop 1” behavior of the sequence implementation. Forever after, we invoke the reducing function on both the separator and then on the element:

(defn interpose
  ([sep]
   (fn [rf]
     (let [started (volatile! false)]
       (fn
         ([] (rf))
         ([result] (rf result))
         ([result input]
          (if @started
            (let [sepr (rf result sep)]
              (if (reduced? sepr)
                sepr
                (rf sepr input)))
            (do
              (vreset! started true)
              (rf result input))))))))

As with distinct, the started flag is a volatile created once per transducing process and the real business happens in the reducing arity.

One issue that we need to deal with is being aware of reduced values. The calls to rf on the input are fine - they may return a reduced value that will be dealt with at a higher level (ultimately the transducing process itself). The special case is when a reduced value is returned from the separator. An example where this could happen would be:

(into [] (comp (interpose :x) (take 4)) (range 3))

The (range 3) produces the sequence (0 1 2). The interpose should produce (0 :x 1 :x 2). The take should then grab just (0 :x 1 :x), and the reduced wrapper will be returned when rf is invoked on a separator (:x).

So in the transducer code, we need to check if we’ve already encountered a reduced value when we invoke rf on the sepr, and if so stop and return the reduced value without invoking on the next input.

That’s it! Quick performance comparison:

expr                          time
(into [] (interpose nil v))   316.0 µs
(into [] (interpose nil) v)    35.5 µs

This code has not yet been screened or added to 1.7, but I expect that it will be.

by Inside Clojure at December 18, 2014 12:00 AM

HN Daily

Planet Theory

On Google Scholar H-Index Manipulation by Merging Articles

Authors: René van Bevern, Christian Komusiewicz, Rolf Niedermeier, Manuel Sorge, Toby Walsh
Download: PDF
Abstract: Google Scholar allows merging multiple article versions into one. This merging affects the H-index computed by Google Scholar. We analyze the parameterized complexity of maximizing the H-index using article merges. Herein, multiple possible measures for computing the citation count of a merged article are considered. Among others, for the measure used by Google Scholar, we give an algorithm that maximizes the H-index in linear time if there is only a constant number of versions of the same article. In contrast, if we are allowed to merge arbitrary articles, then already increasing the H-index by one is NP-hard.

December 18, 2014 12:00 AM

A Self-Tester for Linear Functions over the Integers with an Elementary Proof of Correctness

Authors: Sheela Devadas, Ronitt Rubinfeld
Download: PDF
Abstract: We present simple, self-contained proofs of correctness for algorithms for linearity testing and program checking of linear functions on finite subsets of integers represented as n-bit numbers. In addition we explore two generalizations of self-testing to multiple variables - the case of multilinear functions and homomorphisms on a multidimensional vector space.

We show that our self-testing algorithm for the univariate case can be directly generalized to vector space domains. The number of queries made by our algorithms are independent of domain size. However, linearity testing for multilinear functions requires a different testing algorithm. We give an algorithm for the k-linearity problem with queries independent of the size of the domain.

December 18, 2014 12:00 AM

December 17, 2014

CompsciOverflow

Is order of bits in byte really not of concern?

What I can't wrap my head around is the sentence repeated everywhere I look: that the order of bits in a byte is not important (not my concern as a programmer). My question, then, is whether there is any possibility that it makes a difference?

For example, I create a binary file with just 0x1 in it (represented on my machine as 00000001). What keeps another machine from reading the same byte as 128 (10000000)? Is there a standard for MSB placement in files and memory that guarantees compatibility, or am I missing something trivial/obvious?

EDIT: Thanks to dirk5959's answer I found out that my machine is little-endian for bytes, and the same goes for bits within a byte. An additional question: is this a rule, or is there some architecture that behaves differently?

by zubergu at December 17, 2014 11:42 PM

Lobsters

/r/netsec

Lobsters

CompsciOverflow

Prove Single-Tape and Non-write Turing Machine can Only Recognize Regular Language?

Here is the problem:

Prove that a single-tape TM that cannot write on the portion of the tape containing the input string recognizes only regular languages.

My idea is to prove that this particular TM is equivalent to DFA.

Using this TM to simulate a DFA is very straightforward.

However, when I want to use this DFA to simulate TM, I encounter the problem. For the TM transition $\delta(q,a)=(q',a,R)$, DFA can simulate definitely by reading tape to the right and doing the same state transition.

For $\delta(q,a)=(q',a,L)$, I cannot figure out how to use a DFA or NFA to simulate the left move, because the DFA only reads left to right and has no stack or other storage.

Should I consider another way? Could anyone give me some hints? Thanks.

by user3273554 at December 17, 2014 11:39 PM

StackOverflow

Parsing RDF items

I have a couple lines of (I think) RDF data

<http://www.test.com/meta#0001> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Class> 
<http://www.test.com/meta#0002> <http://www.test.com/meta#CONCEPT_hasType> "BEAR"^^<http://www.w3.org/2001/XMLSchema#string>

Each line has 3 items in it. I want to pull out the item before and after the URL. So that would result in:

0001, type, Class
0002, CONCEPT_hasType, (BEAR, string)

Is there a library out there (java or scala) that would do this split for me? Or do I just need to shove string.splits and assumptions in my code?

by MintyAnt at December 17, 2014 11:37 PM

CompsciOverflow

Understanding Instruction Cycle?

A basic instruction cycle consists of these 5 stages.

Instruction Cycle

  1. IF - Instruction Fetch
  2. RD - Instruction Decode and Register Read
  3. EX - Execute
  4. MA - Memory Access
  5. WB - Write Back

I understood the meaning of all the stages except the 4th, MA (Memory Access). What's the significance of this stage?

by Atinesh at December 17, 2014 11:28 PM

Do programmers often use SDLC Methodologies for making a System? [migrated]

My instructor told us that some programmers do not use SDLC methodologies when making a system, whereas in our project we used one of the methodologies. If we had not used one of them, maybe our system would not be functional.

by Miramiel at December 17, 2014 11:24 PM

TheoryOverflow

Low-degree testing in PCP Theorem using bivariate polynomials

I read about modifications of the low-degree test used in the (first) proof of the PCP theorem. The test used in the proof works over randomly chosen lines while modifications allow choosing random planes (or affine subspaces in general). Is it possible to use these modifications in the framework (low-degree extensions and sum-checking) of the proof of the PCP theorem? Does this involve major changes?

EDIT: In the paper [1] by Raz and Safra it is stated that the scheme of the old proof is insufficient when one wants to achieve a sub-constant error. So what if one is satisfied by the "old" constant error? Does the (Hyper-)Plane-Point Test work for that? I have seen proofs using this test to show $NP \subseteq PCP[log(n), polylog(n)]$ but I have never seen the final argument for $NP = PCP[log(n), 1]$ via proof composition.

[1] A Sub-Constant Error-Probability Low-Degree Test, and a Sub-Constant Error-Probability PCP Characterization of NP

by Schnatzi at December 17, 2014 11:15 PM

QuantOverflow

How to annualise the volatility of non-iid returns?

I have a series of monthly log-returns; let's assume the log-returns are normally distributed, but exhibit significant serial correlation.

In the case of normal, i.i.d. returns, I can annualise the log-returns by multiplying by a factor of 12, and annualise the volatility by a factor of sqrt(12).

Given the dependence in my returns, how do I correctly scale to annual results?
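For reference, the i.i.d. scaling in equation form, plus the standard serial-correlation adjustment (my addition, assuming covariance-stationary monthly log-returns $r_i$ with lag-$k$ autocorrelation $\rho_k$ and monthly volatility $\sigma_m$): \begin{equation} \sigma_{ann}^2 = \operatorname{Var}\left(\sum_{i=1}^{12} r_i\right) = \sigma_m^2 \left( 12 + 2\sum_{k=1}^{11} (12-k)\,\rho_k \right) \end{equation} which reduces to $12\sigma_m^2$, i.e. $\sigma_{ann} = \sqrt{12}\,\sigma_m$, when all $\rho_k = 0$.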

by Smackboyg at December 17, 2014 11:13 PM

Planet Clojure

Validateur 2.4.2 is released

TL;DR

Validateur is a functional validations library inspired by Ruby’s ActiveModel. Validateur 2.4 is a minor feature release.

Changes Between 2.3.0 and 2.4.0

Clojure 1.4 Support Dropped

The project no longer tries to maintain Clojure 1.4 compatibility.

validate-some

validate-some tries any number of validators, short-circuiting at the first failed validator. This behavior is similar to or.

(require '[validateur.validation :refer :all])

(let [v (validate-some
         (presence-of :cake-count :message "missing_cake")
         (validate-by :cake-count odd? :message "even_cake"))]

  "Odd cake counts are valid."
  (v {:cake-count 1})
  ;;=> [true #{}]


  "Even cake counts only throw the second error, since the first
  validation passed."
  (v {:cake-count 2})
  ;;=> [false {:cake-count #{"even_cake"}}]

  "The second validation never gets called and never throws a NPE, as
  it would if we just composed them up."
  (v {})
  ;;=> [false {:cake-count #{"missing_cake"}}]
  )

Contributed by Sam Ritchie (PaddleGuru).

errors? and errors

Errors in validateur are vectors if keys are nested. If keys are only one layer deep – :cake, for example – the error can live at :cake or [:cake].

The errors function returns the set of errors for some key, nested or bare. :cake will return errors stored under [:cake] and vice-versa.

errors? is a boolean wrapper that returns true if some key has errors, false otherwise.
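A hypothetical usage sketch (the argument order here is an assumption based on the description above, not checked against the release):

(require '[validateur.validation :as v])

;; an error map produced by a validator, with a nested key
(def result {[:cake] #{"missing_cake"}})

;; errors returns the same set whether queried with :cake or [:cake]
(v/errors :cake result)   ;;=> #{"missing_cake"}
(v/errors [:cake] result) ;;=> #{"missing_cake"}

;; errors? is the boolean wrapper
(v/errors? :cake result)  ;;=> true
(v/errors? :pie result)   ;;=> false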

Contributed by Sam Ritchie (PaddleGuru).

Full Change Log

Validateur change log is available on GitHub.

Validateur is a ClojureWerkz Project

Validateur is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Monger, a Clojure MongoDB client for a more civilized age
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client built around CQL
  • Neocons, a client for the Neo4J REST API
  • Welle, a Riak client with batteries included
  • Quartzite, a powerful scheduling library

and several others. If you like Validateur, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

About The Author

Michael on behalf of the ClojureWerkz Team

by The ClojureWerkz Team at December 17, 2014 11:01 PM

Fefe

John Pilger wants to make a new film and produce it via crowdfunding ...

John Pilger wants to make a new film and produce it via crowdfunding. The film is called "The Coming War" and is about the conflict between the USA and China. The link goes to the promotional trailer for the crowdfunding campaign. The trailer is essentially a compilation of highlights from Pilger's work so far. That alone is impressive, in case anyone doesn't know Pilger. A few details are available on the crowdfunding site.

I think it's time to put aside 50 euros or so per month and use it to systematically support projects like this one. I'm not actually a fan of crowdfunding, because I think films like this should be financed with my GEZ fees. But that obviously isn't happening.

Update: If you want to get an idea of what a John Pilger film looks like, you can watch his film "The War You Don't See" about the Iraq war here.

December 17, 2014 11:00 PM

These Pegida demos are like NPD demos. ...

These Pegida demos are like NPD demos, at least in terms of the contempt for humanity you're met with there. I don't want to hold up the "Stern" as an example of great journalism, but this is not how you treat people, whether or not you consider them journalists from media hostile to you.

December 17, 2014 11:00 PM

StackOverflow

How to connect to a remote MySQL database via SSL using Play Framework?

I deploy Play applications in distributed environments, backed by a remote MySQL database. Specifically, the applications are hosted on heroku, and the database is on Amazon RDS (though this really applies to any remote database connection). Since the database isn't just on localhost, I'd prefer that the remote MySQL connection is made through SSL for security.

Given a CA certificate to trust, how can I configure a Play application to connect to the MySQL server through SSL, only if the host certificate can be verified?

Assume this as the current database configuration:

db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://url.to.database/test_db"
db.default.user=root 
db.default.password="...."
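For context, a rough sketch of what an SSL-enabled setup might look like with MySQL Connector/J; the URL parameters (useSSL, requireSSL, verifyServerCertificate) are Connector/J's own, while the truststore path and password below are placeholders, assuming the CA certificate has been imported into a JKS truststore with keytool:

db.default.url="jdbc:mysql://url.to.database/test_db?useSSL=true&requireSSL=true&verifyServerCertificate=true"

plus JVM options pointing at the truststore that contains the CA certificate:

-Djavax.net.ssl.trustStore=/path/to/truststore.jks
-Djavax.net.ssl.trustStorePassword=changeit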

by m-z at December 17, 2014 10:57 PM

QuantOverflow

is Sum of P&L equal to portfolio value

For a portfolio containing FX options, would the sum of P&L for each option be the portfolio value?

by user13524 at December 17, 2014 10:52 PM

StackOverflow

Enable Statistics (stats) for Scala Tests in Intellij

I want to enable statistics (stats) for Scala Tests in Intellij.

According to this ScalaTest link there is a way to turn on stats for tests, but the default configuration is nostats (to not show them):

The link suggests running this stats command; is there an equivalent way to run the tests in Intellij?

scala> stats.run(new ArithmeticSuite)

Here is an example of the sample stats I would like to see:

Run completed in 386 milliseconds.
Total number of tests run: 2
Suites: completed 1, aborted 0
Tests: succeeded 1, failed 1, ignored 1, pending 1

Perhaps there is a VM parameter or Test option I can set in the configuration?

I am using Intellij 13.4.1 with the Scala Plugin, Scala 2.10, Scalatra 2.3.0, Maven 3.2.3, Java 1.8, Specs2, Windows 7

by satoukum at December 17, 2014 10:34 PM

Lobsters

Tagged memory and minion cores in the lowRISC SoC

As background, the lowRISC project aims to produce a fully open-source SoC using the RISC-V instruction set architecture and bring it to volume production. We've just released this document to describe in more detail our plans for tagged memory (not exactly the rebirth of the LISP machine, but it has uses for security, debug/performance analysis, and many more) and minion cores (used to implement I/O peripherals in software, or for isolated secure execution, or …). I'd really welcome your thoughts.


by asb at December 17, 2014 10:31 PM

CompsciOverflow

"For small values of n, O(n) can be treated as if it's O(1)"

I've heard several times that for sufficiently small values of n, O(n) can be thought about/treated as if it's O(1).

Example:

The motivation for doing so is based on the incorrect idea that O(1) is always better than O(lg n), is always better than O(n). The asymptotic order of an operation is only relevant if under realistic conditions the size of the problem actually becomes large. If n stays small then every problem is O(1)!

What is sufficiently small? 10? 100? 1,000? At what point do you say "we can't treat this like a free operation anymore"? Is there a rule of thumb?

This seems like it could be domain- or case-specific, but are there any general rules of thumb about how to think about this?
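One way to make this concrete (my own back-of-the-envelope, assuming roughly 1 ns per simple operation): a linear scan of n = 1,000 elements costs on the order of a microsecond, which is effectively free next to a single disk seek (~10 ms), yet inside a loop executed a million times it adds a full second. So "sufficiently small" depends less on n itself than on how often the O(n) operation runs and what it is competing with.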

by rianjs at December 17, 2014 10:09 PM

StackOverflow

Get rid of extra test during initialization of loop/recursion? [migrated]

I'm reluctant to ask this question. My code below works, it's intelligible, and it seems reasonably efficient. It's just that there's a trivial, nitpicky issue that's driving me crazy. The function maxes below collects all elements of a sequence that are "maximum" according to some criterion.

;; Example of use of maxes
;; Collect all maps with the maximum value for `:a`:
(maxes :a [{:a 1 :b 2} {:a 4 :b 5} {:a 5 :b 5} {:b 3 :a 5}])
;;=> [{:b 5, :a 5} {:b 3, :a 5}]

(defn- maxes-helper
  "Helper function for maxes."
  [f s best-val collected]
  (if (empty? s)
    collected
    (let [new-elt (first s)
          new-val (f new-elt)]
      (cond (== new-val best-val) (recur f (rest s) best-val (conj collected new-elt))
            (>  new-val best-val) (recur f (rest s) new-val  [new-elt])
            :else                 (recur f (rest s) best-val collected)))))

(defn maxes
  "Returns a sequence of elements from s, each with the maximum value of (f element)."
  [f s]
  (if (empty? s)
    nil
    (let [new-elt (first s)
          new-val (f new-elt)]
      (maxes-helper f (rest s) new-val [new-elt]))))

What's really bugging me is that I have to test for emptiness of the input collection s both in the top-level function and in the tail-recursive helper function. I also want to add a custom exception when (f new-elt) is non-numeric, so I need to test that twice: both for new-val and for the first call to (f new-elt) that becomes the first best. So I've got a trivial amount of code duplication, and I keep thinking that there must be a way to get rid of it. But I can't see how to do this (without testing that best is numeric on every iteration). Am I missing something obvious (or non-obvious)?

(BTW I wrote another version using reduce, but it's about twice as slow, according to Criterium.)
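For comparison, a minimal sketch of a reduce-based variant (my guess at roughly the shape of such a version; note it returns [] rather than nil for an empty input):

(defn maxes-via-reduce
  "Like maxes, but a single pass via reduce, carrying [best-val collected] as the accumulator."
  [f s]
  (second
   (reduce (fn [[best coll :as acc] elt]
             (let [v (f elt)]
               (cond (nil? best) [v [elt]]   ; first element seen
                     (== v best) [best (conj coll elt)]
                     (> v best)  [v [elt]]
                     :else       acc)))
           [nil []]
           s)))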

by Mars at December 17, 2014 09:55 PM

Can reduceBykey be used to change type and combine values - Scala Spark?

In the code below I'm attempting to combine values:

val rdd: org.apache.spark.rdd.RDD[((String), Double)] =
    sc.parallelize(List(
      (("a"), 1.0),
      (("a"), 3.0),
      (("a"), 2.0)
      ))

  val reduceByKey = rdd.reduceByKey((a , b) => String.valueOf(a) + String.valueOf(b))

reduceByKey should contain (a, 1,3,2), but I receive a compile-time error:

Multiple markers at this line - type mismatch; found : String required: Double - type mismatch; found : String 
 required: Double

What determines the type of the reduce function? Can the type not be converted?

I could use groupByKey to achieve same result but just want to understand reduceByKey.

by blue-sky at December 17, 2014 09:53 PM

QuantOverflow

Historical Data on $/yen forward exchange rates

Would anyone happen to know where I can find historical forward exchange rate data between the yen and dollar?

by Zslice at December 17, 2014 09:52 PM

StackOverflow

Scalding: Trouble reading avro file with nested structure

I need to read in an Avro file in Scalding but have no idea how to work with it. I have worked with straightforward avro files but this one is a little more complicated. The schema looks like this:

{"type":"record",
 "name":"features",
 "namespace":"OurCode",
 "fields":[{"name":"key","type":"long"},
       {"name":"features",
        "type":{"type":"map","values":"double"}}]
}

Not sure how to read this data when the second "field" is a nested field that contains multiple fields inside of it and when each record contains a potentially different set of nested fields.

I initially tried to read it in using UnpackAvroSource and wrote to a Tsv, but I ended up with data that looked like:

key1   {var1=4, var2 = 3, var4 = 10}
key2   {var3 = 15, var4 = 9, var5 = 22}

Also tried creating a case class:

case class FileType(var key:Long, var features:Map[String,Double])

and then tried to read it in with:

PackedAvroSource[FileType](args("input"))

I got an error that says: could not find implicit value for evidence parameter of type com.twitter.scalding.avro.AvroSchemaType[FileReader.this.FileType], where FileReader is the name of the class where the data is being read in.

Ultimately, I need to turn the above data into something that looks like:

             Var1   Var2   Var3   Var4   Var5
Key1           1      3     0      10     0
Key2           0      0     15      9     22

So if there is a better way to do that then that would work too.

Not very experienced with scalding or avro files so any help here is appreciated. Let me know what other info I might need to provide.

Thanks.

by J Calbreath at December 17, 2014 09:49 PM

Generating an XML document based on a hierarchy structure

My database table looks like:

Category
  id
  parent_id
  path_url
  name
  sort_order

So a row may look like:

1  NULL  /cars       cars    1
2  1     /cars/honda honda   2

I have all the categories loaded in a List:

   val categories: List[Category] = .....

Now I want to generate an XML representation of this hierarchial structure like:

<root>
  <category id="1" path_url="/cars" name="cars">
     <category id="5" path_url="/cars/honda" name="honda">
        <category id="12" path_url="/cars/honda/accord" name="accord"></category>
        <category id="3" path_url="/cars/honda/civic" name="civic"></category>
     </category>
     <category id="15" path_url="/cars/ford" name="ford">
          <category id="12" path_url="/cars/ford/escort" name="escort"></category>
     </category>
  </category>
  <category id="23423" path_url="/food" name="food>
  ....
  </category>
  ...
</root>

Key points:

  1. The sub-items should be ordered by sort order (INT)

by Blankman at December 17, 2014 09:41 PM

/r/clojure

Lobsters

/r/netsec

Lobsters

QuantOverflow

Do we need Feller condition if volatility process jumps?

It is fairly well known that in affine processes, such as the Heston model \begin{equation} \begin{aligned} dS_t &= \mu S_t dt + \sqrt{v_t} S_t dW^{S}_{t} \\ dv_t &= k(\theta - v_t) dt + \xi \sqrt{v_t} dW^{v}_{t} \end{aligned} \end{equation} the SV $v_t$ is a strictly positive process if the drift is strong enough, i.e. if the drift parameters ($k$, the speed of mean reversion, and $\theta$, the mean-reversion level) and the vol-of-vol $\xi$ satisfy: \begin{equation} k \theta > \frac{1}{2} \xi^2 \end{equation} which is known as the Feller condition. I know this condition can be generalized to multi-factor affine processes. For example, if the volatility of the returns $\log S_t$ is made of several independent factors $v_{1,t},v_{2,t},...,v_{n,t}$, then the Feller condition applies to each factor separately (check here at page 705, for example). Moreover, Duffie and Kan (1996) provide a multidimensional extension of the Feller condition.

But I still don't understand whether we still need the (or a sort of) Feller condition in the case of jump-diffusion. You may consider, for example, the simple case of a volatility factor with exponentially distributed jumps: \begin{equation} dv_t = k(\theta - v_t) dt + \xi \sqrt{v_t} dW^{v}_{t} + dJ^{v}_{t} \end{equation} where $J^{v}_{t}$ is a compound Poisson process, independent of the Wiener process $W^{v}_{t}$. The Poisson arrival intensity is a constant $\lambda$, and jumps have mean size $\gamma$. I observe that in this case the long-term mean-reversion level is jump-adjusted: \begin{equation} \theta \Longrightarrow \theta ^{*}=\theta + \frac{\lambda}{k} \gamma \end{equation} so I suspect that if a sort of Feller condition applies, it must depend on jumps.
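As a sanity check on that adjustment (my one-line derivation, assuming $E[dJ^{v}_{t}] = \lambda \gamma \, dt$ for the compound Poisson term): \begin{equation} dE[v_t] = k(\theta - E[v_t]) dt + \lambda\gamma \, dt = k\left(\theta + \frac{\lambda}{k}\gamma - E[v_t]\right) dt = k(\theta^{*} - E[v_t]) dt \end{equation}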

Nevertheless, from a purely intuitive perspective, even if the barrier at $v_t = 0$ were absorbing, a jump would pull the process back from 0 again.

Thanks for your time and attention.

by Gabriele Pompa at December 17, 2014 09:24 PM

StackOverflow

PHP : if statement does not work with form in

When I run this code it already says 'checked' even if I haven't submitted anything. Any help? Why isn't the if statement working?

<?php 

if ( isset ( $_POST['roll'] ) && !empty($_POST['roll'])){
    echo 'checked' ;
}   
?>

<form action = 'test.php' method = 'POST'>
    <input type = 'submit' name  = 'roll' value = 'roll dice.'> 
</form>

by Hussain Wali at December 17, 2014 09:18 PM

Play Framework 2.2.2: JavaScript routing error - required: Int

object JavaScriptRouters extends Controller{
 def javascriptRoutes = Action { implicit request =>
  import routes.javascript._
  Ok(
   Routes.javascriptRouter("jsRoutes")(
    XyzController.getAbc
   )
  ).as("text/javascript")
 }
}

I have the above function as per the documentation

The XyzController is not an object but a class. I had a question about how to use this exact controller in the template. I tried to use a similar route here: com.example.controllers.routes.javascript.XyzController.getAbc. In both cases I get the same error.

[error]  found   : play.core.Router.JavascriptReverseRoute
[error]  required: Int
[error]         com.example.controllers.routes.javascript.XyzController.getAbc
[error]                                                                 ^
[error] one error found
[error] (compile:compile) Compilation failed

Also I am using activator v1.2.10

Thanks.

by Shashi at December 17, 2014 09:11 PM

Planet Emacsen

Irreal: let-alist

This looks really great.

by jcs at December 17, 2014 09:06 PM

StackOverflow

ansible include_var not working

I want to import variables from a playbook A into a playbook B :

Playbook B :

 
---
- hosts: portal
  sudo: no

  tasks:

  - include_vars: varz.yml

  - debug: var=vars

  - debug: var=x

playbook A :

  vars:

    x: 123
    y: abc

The result I get is :

TASK: [debug var=x] *********************************************************** 
ok: [192.168.78.10] => {
    "x": "{{ x }}"
}

I was expecting x: 123

by Max L. at December 17, 2014 09:03 PM

Fefe

3sat Kulturzeit paid a visit to "Fatalist", the blogger ...

3sat Kulturzeit paid a visit to "Fatalist", the blogger behind NSU-Leaks. In the end they only got hold of him via Skype, but the segment is still worth watching. Hurry up before it gets "de-published" again.

December 17, 2014 09:01 PM

Portland Pattern Repository

StackOverflow

how to put the result of an echo command into an ansible variable

I have $MY_VAR set to some value on the remote host, and I want to query it from a playbook (put its value in an ansible variable). Here's what I am seeing:

   - name: put shell var into ansible var
     command: echo $MY_VAR
     register: my_var

   - debug: var=my_var
ok: [192.168.78.10] => {
    "my_var": {
        "changed": true, 
        "cmd": [
            "echo", 
            "$my_var"
        ], 
        "delta": "0:00:00.002284", 
        "end": "2014-12-17 18:10:01.097217", 
        "invocation": {
            "module_args": "echo $my_var", 
            "module_name": "command"
        }, 
        "rc": 0, 
        "start": "2014-12-17 18:10:01.094933", 
        "stderr": "", 
        "stdout": "$my_var", 
        "stdout_lines": [
            "$my_var"
        ]
    }
}

note:

If I change the command to :

 command: pwd

then I get the expected result :

"my_var": {
  "stdout": "/home/vagrant", 
  "stdout_lines": [
      "/home/vagrant"  
  ]
}

It seems as if echo does not expand the variable when called from Ansible.
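For what it's worth, a hedged guess at the cause: Ansible's command module does not pass its arguments through a shell, so $MY_VAR is never expanded, whereas the shell module does. A sketch with the module swapped:

   - name: put shell var into ansible var
     shell: echo $MY_VAR
     register: my_var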

by Max L. at December 17, 2014 09:00 PM

/r/netsec

Lobsters

AWS

EC2 Container Service In Action

We announced the Amazon EC2 Container Service at AWS re:Invent and invited you to join the preview. Since that time, we've seen a lot of interest and a correspondingly high signup rate for the preview. With the year winding down, I thought it would be fun to spend a morning putting the service through its paces. We have already approved all existing requests to join the preview; new requests are currently being approved within 24 hours.

As I noted in my earlier post, this new service will help you to build, run, and scale Docker-based applications. You'll benefit from easy cluster management, high performance, flexible scheduling, extensibility, portability, and AWS integration while running in an AWS-powered environment that is secure and efficient.

Quick Container Review
Before I dive in, let's take a minute to review some of the terminology and core concepts implemented by the Container Service.

  • Cluster - A logical grouping of Container Instances that is used to run Tasks.
  • Container Instance - An EC2 instance that runs the ECS Container Agent and that has been registered into a Cluster. The set of instances running within a Cluster create a pool of resources that can be used to run Tasks.
  • Task Definition - A description of a set of Containers. The information contained in a Task Description defines one or more Containers. All of the Containers defined in a particular Task Definition are run on the same Container Instance.
  • Task - An instantiation of a Task Definition.
  • Container - A Docker container that was created as part of a Task.

The ECS Container Agent runs on Container Instances. It is responsible for starting Containers on behalf of ECS. The agent itself runs within a Docker container (available on Docker Hub) and communicates with the Docker daemon running on the Instance.

When talking about a cluster or container service, "scheduling" refers to the process of assigning tasks to instances. The Container Service provides you with three scheduling options:

  1. Automated - The RunTask function will start a Task (as specified by a Task Definition) on a Cluster using random placement.
  2. Manual - The StartTask function will start a Task (again, as specified by a Task Definition) on a specified Container Instance (or Instances).
  3. Custom - You can use the ListContainerInstances and DescribeContainerInstances functions to gather information about available resources within a Cluster, implement the "brain" of the scheduler (in other words, use the available information to choose a suitable Container Instance), and then call StartTask to start a task on the Instance. When you do this you are, in effect, creating your own implementation of RunTask.

EC2 Container Service in Action
In order to gain some first-hand experience with ECS, I registered for the preview and then downloaded, installed, and configured a preview version of the AWS CLI. Then I created an IAM Role and a VPC and set about to create my cluster (ECS is currently available in US East (Northern Virginia) with support for other Regions expected in time). I ran the following command:

$ aws ecs create-cluster --cluster-name MyCluster --profile jbarr-cli

The command returned information about my new cluster as a block of JSON:

{
    "cluster": {
        "clusterName": "MyCluster", 
        "status": "ACTIVE", 
        "clusterArn": "arn:aws:ecs:us-east-1:348414629041:cluster/MyCluster"
    }
}

Then I launched a couple of EC2 instances into my VPC using an ECS-enabled AMI that had been shared with me as part of the preview process (this is a very lightweight version of the Amazon Linux AMI, optimized and tuned for ECS). I chose my new IAM Role (ecs) as part of the launch process:

I also edited the instance's User Data to make the instance launch in to my cluster:

After the instances launched I was able to see that they were part of my cluster:

$ aws ecs list-container-instances --cluster MyCluster --profile jbarr-cli
{
    "containerInstanceArns": [
        "arn:aws:ecs:us-east-1:348414629041:container-instance/4cf62484-da62-49a5-ad32-2015286a6d39", 
        "arn:aws:ecs:us-east-1:348414629041:container-instance/be672053-0ff8-4478-b136-7fae9225e493"
    ]
}

I can choose an instance and query it to find out more about the registered and available CPU and memory resources:

$ aws ecs describe-container-instances --cluster MyCluster \
  --container-instances arn:aws:ecs:us-east-1:348414629041:container-instance/4cf62484-da62-49a5-ad32-2015286a6d39 \
  --profile jbarr-cli

Here's an excerpt from the returned data:

{
            "registeredResources": [
                {
                    "integerValue": 1024, 
                    "longValue": 0, 
                    "type": "INTEGER", 
                    "name": "CPU", 
                    "doubleValue": 0.0
                }, 
                {
                    "integerValue": 3768, 
                    "longValue": 0, 
                    "type": "INTEGER", 
                    "name": "MEMORY", 
                    "doubleValue": 0.0
                }
            ]
}

Following the directions in the Container Service Developer Guide, I created a simple task definition and registered it:

$ aws ecs register-task-definition --family sleep360 \
  --container-definitions file://$HOME/tmp/task.json \
  --profile jbarr-cli

Then I ran 10 copies of the task:

aws ecs run-task --cluster MyCluster --task-definition sleep360:1 --count 10 --profile jbarr-cli

And I listed the running tasks:

$ aws ecs list-tasks --cluster MyCluster --profile jbarr-cli

This is what I saw:

{
    "taskArns": [
        "arn:aws:ecs:us-east-1:348414629041:task/0c949733-862c-4979-b5bd-d4f8b474c58e", 
        "arn:aws:ecs:us-east-1:348414629041:task/3ababde9-08dc-4fc9-b005-be5723d1d495", 
        "arn:aws:ecs:us-east-1:348414629041:task/602e13d2-681e-4c87-a1d9-74c139f7335e", 
        "arn:aws:ecs:us-east-1:348414629041:task/6d072f42-75da-4a84-8b68-4841fdfe600d", 
        "arn:aws:ecs:us-east-1:348414629041:task/6da6c947-8071-4111-9d31-b87b8b93cc53", 
        "arn:aws:ecs:us-east-1:348414629041:task/6ec9828a-cbfb-4a39-b491-7b7705113ad2", 
        "arn:aws:ecs:us-east-1:348414629041:task/87e29ab2-34be-4495-988b-c93ac1f8b77c", 
        "arn:aws:ecs:us-east-1:348414629041:task/ad4fc3cc-7e80-4681-b858-68ff46716fe5", 
        "arn:aws:ecs:us-east-1:348414629041:task/cdd221ea-837c-4108-9577-2e4f53376c12", 
        "arn:aws:ecs:us-east-1:348414629041:task/eab79263-087f-43d3-ae4c-1a89678c7101"
    ]
}

I spent some time describing the tasks and wrapped up by shutting down the instances. After going through all of this (and making a mistake or two along the way due to being so eager to get a cluster up and running), I'll leave you with three simple reminders:

  1. Make sure that your VPC has external connectivity enabled.
  2. Make sure to use the proper, ECS-enabled AMI.
  3. Make sure to launch the AMI with the requisite IAM Role.

ECS Quickstart Template
We have created an ECS Quickstart Template for CloudFormation to help you to get up and running even more quickly. The template creates an IAM Role and an Instance Profile for the Role. The Role supplies the permission that allows the ECS Agent to communicate with ECS. The template launches an instance using the Role and returns an SSH command that can be used to access the instance. You can launch the instance into an existing cluster, or you can use the name "default" to create (if necessary) a default cluster. The instance is always launched within your Default VPC.

Contain Yourself
If you would like to get started with ECS, just register now and we'll get you up and running as soon as possible.

To learn more about ECS, spend 30 minutes watching this session from re:Invent (one caveat: the video is already a bit dated; for example, Task Definitions are no longer versioned):

You can also register for our upcoming (January 14th, 2015) webinar, Amazon EC2 Container Service Deep Dive. In this webinar, my colleague Deepak Singh will talk about why we built EC2 Container Service, explain some of the core concepts, and show you how to use the service for your applications.

CoreOS is a new Linux distribution designed to support the needs of modern infrastructure stacks. The CoreOS AMI now supports ECS; you can read the Amazon ECS on CoreOS documentation to learn more.

As always, we are interested in your feedback. With ECS still in preview mode, now is the perfect time for you to let us know more about your needs. You can post your feedback to the ECS Forum. You can also create AWS Support cases if you are in need of assistance.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at December 17, 2014 08:51 PM

StackOverflow

compojure POST request parameters are empty when app deployed to heroku

My code is very simple:

(def form-test
  "<html><body><form action=\"/\" method=\"POST\"><input type=\"text\" name=\"ss\"/><input type=\"submit\" value=\"submit\"/></form></body></html>")

(defroutes app-routes
  (GET "/" [] form-test)
  (POST "/" req (str "the req: " req))
  (route/resources "/")
  (route/not-found "Not Found"))

(def app
  (handler/site app-routes))

whenever I try my app on my local machine it works fine, I can see the request parameters, but when I deploy the same thing to heroku the request parameters are always empty... what's going on?

by markg at December 17, 2014 08:33 PM

Zyre won't run on network only inproc

I'm trying to test the Zyre (czmq) example between two Ubuntu computers. I can compile my code correctly, but if I add a set_interval call it gives a core dump as soon as I connect another peer, and if I don't specify set_interval it blocks in zyre_recv() in an infinite loop. Both ends run the same program with different node names. Here is my code:

#include "zyre.h"

//  This actor will listen and publish anything received
//  on the CHAT group


int main (int argc, char *argv[])
{
    zctx_t *ctx = zctx_new ();
    // Create two nodes
    zyre_t *node1 = zyre_new ("node1");
   //zyre_set_verbose(node1);
   //zyre_t *node2 = zyre_new ("node2");
   //zyre_set_header (node1, "X-FILEMQ", "tcp://128.0.0.1:6777");
   zyre_set_header (node1, "X-HELLO", "World");
   //zyre_set_interval (node1, 1000); //here it gives core dump if activated
   zyre_start (node1);
   //zyre_start (node2);
   zyre_join (node1, "GLOBAL");
   //zyre_join (node2, "GLOBAL");

   // Give time for them to interconnect
   //zclock_sleep (500);
   printf("%s\n",zyre_uuid(node1));
   //zyre_dump(node1);
   //zclock_sleep(500);
   //zyre_dump(node1);
   // One node shouts to GLOBAL
   zyre_shouts (node1,"GLOBAL", "Hello, World");
   zclock_sleep(200);
   zyre_dump(node1);
   //zyre_dump(node1);


   // TODO: should timeout and not hang if there's no networking
   // ALSO why doesn't this work with localhost? zbeacon?
   // Second node should receive ENTER, JOIN, and SHOUT
   zmsg_t *msg = zyre_recv (node1); // blocks here by default but does not
                                    // receive the message from the other peer
   assert (msg);
   char *command = zmsg_popstr (msg);
   assert (streq (command, "ENTER"));
   free (command);
   char *peerid = zmsg_popstr (msg);
   free (peerid);
   zframe_t *headers_packed = zmsg_pop (msg);

   assert (headers_packed);
   zhash_t *headers = zhash_unpack (headers_packed);
   assert (headers);
   zframe_destroy (&headers_packed);
   assert (streq (zhash_lookup (headers, "X-HELLO"), "World"));
   zhash_destroy (&headers);
   zmsg_destroy (&msg);

   msg = zyre_recv (node1);
   assert (msg);
   command = zmsg_popstr (msg);
   assert (streq (command, "JOIN"));
   free (command);
   zmsg_destroy (&msg);

   msg = zyre_recv (node1);
   assert (msg);
   command = zmsg_popstr (msg);
   assert (streq (command, "SHOUT"));
   free (command);
   zmsg_destroy (&msg);

   zyre_stop (node1);
   //zyre_stop (node2);

   zyre_destroy (&node1);
   //zyre_destroy (&node2);
  zctx_destroy (&ctx);
  return 0;
}

Any clue as to why this is happening? Thanks.

by Santiago Regusci at December 17, 2014 08:26 PM

Wes Felter

"Do not tip members of the Night’s Watch."

“Do not tip members of the Night’s Watch.”

- Emin Gün Sirer brings some etiquette to the Bitcoin community

December 17, 2014 08:08 PM

CompsciOverflow

How do we manage, algorithmically, virtual computers on Cloud

I am trying to understand how the cloud works.

Every virtual computer is hosted on a host server, and every host server sits inside a data center.

  • Which algorithms are used to choose between host servers to reduce cost (based on space availability)?

Can anyone help describe the process, or point to a paper/book to check?

by user3378649 at December 17, 2014 08:04 PM

StackOverflow

Scala error - package print is not a value

After consulting Google and not finding an answer to my question, I thought I'd ask.

The following code:

var i = 0

while (i < args.length) 
{
  if (i != 0)
      print(" ")
  print(args(i))
  i += 1
}

println()

...produces the following error unless I include the statement import Console._

Package print is not a value

If I wanted to display output on the same line, is there an easier method?

Edit: I'm using Scala 2.11.4 on a Windows XP machine. Thanks.

by CaitlinG at December 17, 2014 08:04 PM

CompsciOverflow

How can I learn about CS? [on hold]

I am a junior in college and I have come to the realization that my school didn't do that good a job of actually teaching real CS to the students. On my own, I have become a fairly proficient programmer, but I know I would be a lot better if I were more confident in my ability to swim at a lower level of the stack. I hardly know any C. (The syntax is easy, but I've never actually implemented anything in it besides some 100-level homework problems. I see this as a problem because so much is written in C.)

I want to be:

  • better at discrete math
  • More versed in algorithms
  • Have a more intimate understanding of how the higher level tools that I rely on actually work.

Basically, I know how to use logic, but I have become dependent on high level tools and I don't like that.

I also suck at math. I mean, I understand it conceptually, and I'm comfortable with set theory, so Python's data structures are really all I need, but I want to be better at algorithms, and math is necessary for that.

Are there any websites or other resources that stress an approach like this?

by LukeP at December 17, 2014 08:03 PM

Portland Pattern Repository

TheoryOverflow

Quantum algorithms for QED computations related to the fine structure constants

My question is about quantum algorithms for QED (quantum electrodynamics) computations related to the fine structure constant. Such computations (as explained to me) amount to computing Taylor-like series $$\sum c_k\alpha^k,$$ where $\alpha$ is the fine structure constant (around 1/137) and $c_k$ is the contribution of Feynman diagrams with $k$ loops.

This question was motivated by Peter Shor's comment (about QED and the fine structure constant) in a discussion regarding quantum computers on my blog. For some background, here is a relevant Wikipedia article.

It is known that a) the first few terms of this computation give very accurate estimates for relations between experimental outcomes, which are in excellent agreement with experiments. b) The computations are very heavy, and computing more terms is beyond our computational powers. c) At some point the computation will explode; in other words, the radius of convergence of this power series is zero.

My question is very simple: can these computations be carried out efficiently on a quantum computer?

Question 1

1) Can we actually efficiently compute (or well-approximate) the coefficients $c_k$ with a quantum computer?

2) (Weaker) Is it at least feasible to compute the estimates given by the QED computation in the regime before these coefficients explode?

3) (Even weaker) Is it at least feasible to compute the estimates given by these QED computations as long as they are relevant (namely, for those terms in the series that give a good approximation to the physics)?

A similar question applies to QCD computations for computing properties of the proton or neutron. (Aram Harrow made a related comment on my blog on QCD computations, and the comments by Alexander Vlasov are also relevant.) I would be happy to learn the situation for QCD computations as well.

Following Peter Shor's comment:

Question 2

Can quantum computation give the answer more accurately than is possible classically because the coefficients explode?

In other words

Will quantum computers allow us to model the situation and to give efficient approximate answers for the actual physical quantities?

Another way to ask it:

Can we compute using quantum computers more and more digits of the fine structure constant, just like we can compute with a digital computer more and more digits of e and $\pi$?

(Ohh, I wish I was a believer :) )

more background

The hope that computations in quantum field theory can be carried out efficiently with quantum computers was (perhaps) one of Feynman's motivations for QC. Important progress towards quantum algorithms for computations in quantum field theories was achieved in this paper: Stephen Jordan, Keith Lee, and John Preskill, Quantum Algorithms for Quantum Field Theories. I don't know if the work by Jordan, Lee, and Preskill (or some subsequent work) implies an affirmative answer to my question (at least in its weaker forms).

A related question on the physics side

I am also curious whether there are estimates for how many terms in the expansion we get before we witness the explosion. (To put it on more formal ground: are there estimates for the minimum $k$ for which $\alpha c_k/c_{k+1} > 1/5$ (say)?) And what quality of approximation can we expect when we use these terms? In other words, how much better can the results of these QED computations get with unlimited computational power?

by Gil Kalai at December 17, 2014 07:57 PM

StackOverflow

How to best access nested JSON data from Cloud Endpoints

I have clients that deliver the payload I am interested in wrapped into a data object, e.g. like so

wrapper : {
  payload : { … },
  otherStuff : { … }
  …
}

I’d like to consume the 'payload' using an endpoints method. What is the best way to do this (given the clients' API cannot be changed)?

Things I tried: It seems like there is no option to specify a JSON path to 'select' just the nested payload, so I thought about constructing a generic Wrapper entity that wraps Payload to resemble the JSON object, i.e. like so

class Wrapper[T <: AbstractPayload[T]] {
   @BeanProperty var payload : T = _

}

And then consume it in a controller like this:

@Api(
  name = "misc"
)
class ConcretePayloadController extends AbstractBaseController[ConcretePayload](classOf[ConcretePayload]) {}

and

abstract class AbstractBaseController[T <: AbstractPayload [T]](clazz : Class[T]) {
  @ApiMethod(
    httpMethod = HttpMethod.POST
  )
  def create(wrapper : Wrapper [T]) : T =  {
    val payload : T = wrapper.payload
  …
  }
}

where

class ConcretePayload extends AbstractPayload [T]{
  …
}

ConcretePayload is the payload I’m interested in. Since I have a lot of different classes and the wrapping object always looks the same, I’d like to have Wrapper remain generic.

Probably due to type erasure, this yields

java.lang.IllegalArgumentException: Parameterized type com.myCompany.Wrapper<com.myCompany.ConcretePayload> not supported.

Is there a way around this (avoiding ApiTransformers), or, another way to access nested JSON with cloud endpoints?

by user462982 at December 17, 2014 07:55 PM

clojure jsvc stop/destroy cleanup not working

Trying to figure out why my cleanup function gets called but doesn't finish. It looks like stop and destroy are running and the cleanup is running; it's just not getting to the end of the function before the worker thread is killed (or something). Any ideas? If you look at the log file, stuff done in stop and destroy shows up, and so does the "get to here" message in cleanup, but the "never get to here" in cleanup doesn't show up.

project.clj:

(defproject stopjsvc "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [org.apache.commons/commons-daemon "1.0.9"]
                 [org.clojure/tools.logging "0.3.1"]
                 [log4j/log4j "1.2.17" :exclusions [javax.mail/mail                 
                                                    javax.jms/jms                   
                                                    com.sun.jmdk/jmxtools           
                                                    com.sun.jmx/jmxri]]]            
  :main ^:skip-aot stopjsvc.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all}})

core.clj:

(ns stopjsvc.core
  (:import [org.apache.commons.daemon Daemon DaemonContext])
  (:require [clojure.tools.logging :as log])
  (:gen-class :implements [org.apache.commons.daemon.Daemon]))

(def workers 1)
(def running (atom false))

(defn cleanup [number]
  (log/info (str "worker " number " get to here"))
  (Thread/sleep 5000) ; simulated cleanup
  (log/info (str "worker " number " *** never get to here")))

(defn worker [number]
  (log/info (str "worker " number))
  (Thread/sleep 1000) ; simulated work
  (if @running
    (recur number)
    (cleanup number)))

(defn -init [this ^DaemonContext context]
  (swap! running not))

(defn -start [this] 
  (doall (for [n (range 0 workers)] (future (worker n))))
  (log/info (str "started " workers " workers")))

(defn -stop [this]
  (log/info "*** enter stop")
  (swap! running not)
  (Thread/sleep 3000) ; wait for workers to cleanup
  (log/info "*** leave stop"))

(defn -destroy [this]
  (log/info "system stopped"))

start/stop script (Mac OS X):

#!/bin/bash

NAME="stopjsvc"

PID="/var/run/stopjsvc.pid"

jsvc_exec() {
  sudo jsvc -java-home "$(/usr/libexec/java_home)" \
            -server \
            -cp "$(pwd)/target/uberjar/stopjsvc-0.1.0-SNAPSHOT-standalone.jar" \
            -pidfile $PID \
            -outfile "$(pwd)/log/stopjsvc.log" \
            -errfile "$(pwd)/log/stopjsvc.log" \
            $1 \
            stopjsvc.core
}

case "$1" in
  start)

    # for now... since i keep forgetting
    lein uberjar

    echo "Starting $NAME"
    jsvc_exec
    echo "$NAME has started"
  ;;
  stop)
    echo "Stopping $NAME"
    jsvc_exec "-stop"
    echo "$NAME has stopped"
  ;;
  restart)
    if [ -f "$PID" ]; then
      echo "Restarting $NAME"
      jsvc_exec "-stop"
      jsvc_exec
      echo "$NAME has started"
    else
      echo "$NAME daemon not running"
      exit 1
    fi
  ;;
  *)
    echo "Usage: /etc/init.d/$NAME {start|stop|restart}" >&2
    exit 3
  ;;
esac

example log output:

2014-12-17 11:57:16,144 INFO stopjsvc.core: worker 0
2014-12-17 11:57:16,144 INFO stopjsvc.core: started 1 workers
2014-12-17 11:57:16,942 INFO stopjsvc.core: *** enter stop
2014-12-17 11:57:17,145 INFO stopjsvc.core: worker 0 get to here
2014-12-17 11:57:19,942 INFO stopjsvc.core: *** leave stop
2014-12-17 11:57:19,994 INFO stopjsvc.core: system stopped
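One hypothesis worth testing (a sketch of mine, not verified against jsvc): the simulated cleanup takes 5000 ms, but -stop only sleeps 3000 ms before returning, after which jsvc proceeds to destroy and JVM shutdown, killing the worker's future mid-cleanup. Holding on to the futures and dereferencing them in -stop would block until cleanup actually completes:

(def worker-futures (atom []))

(defn -start [this]
  (reset! worker-futures
          (doall (for [n (range 0 workers)] (future (worker n)))))
  (log/info (str "started " workers " workers")))

(defn -stop [this]
  (log/info "*** enter stop")
  (swap! running not)
  ;; block until every worker's cleanup has finished, instead of
  ;; sleeping a fixed 3000 ms and hoping that is long enough
  (doseq [f @worker-futures] @f)
  (log/info "*** leave stop"))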

by user3537248 at December 17, 2014 07:48 PM

Lobsters

StackOverflow

Spray.io test response not matching actual output

I'm trying to set up some tests for an API made by a coworker with spray.io, and I'm encountering some odd behavior. When a request results in an error for any reason, we want to return a JSON value along the lines of:

{"status":false,"message":"useful message here"}

This happens just fine in the actual browser. I have navigated to an unhandled route in the web browser, and I get the desired JSON value. So, I want to test this. Now, since I'm new to spray.io, I started off with the very simple test:

"leave GET requests to root path unhandled" in {
  Get() ~> myRoute ~> check {
    handled must beFalse
  }
}

This went fine, no problems. Since it's my first time playing with spray.io, I looked at some of the sample tests for testing false routes, and wrapped myRoute with sealRoute() so I could check the response without failing tests:

"leave GET requests to root path unhandled" in {
  Get() ~> sealRoute(myRoute) ~> check {
    handled must beTrue
  }
}

This also works fine. So, I decided to just make sure the text of the response was usable with this, before I went to the trouble of parsing JSON and verifying individual values:

"leave GET requests to root path unhandled" in {
  Get() ~> sealRoute(myRoute) ~> check {
    responseAs[String] contains "false"
  }
}

This is failing. To investigate, I threw a simple line of code in to log the actual value of responseAs[String] to a file, and I got this:

The requested resource could not be found.

Can anyone tell me what I'm doing wrong? I'm thinking that one of the following is occurring:

  • responseAs[String] is doing more than taking the exact response and giving it back to me, applying some type of filter along the way
  • The framework itself is not fully evaluating the query, but rather making a mockup object for the test framework to evaluate, and therefore not executing the desired 'turn errors to json' methods that my co-worker has implemented

I have tried searching google and stack overflow specifically for similar issues, but I'm either not putting in the right queries, or most other people are content to have the default error messages and aren't trying to test them beyond checking handled must beFalse.

Edit - This is the relevant part of the RejectionHandler:

case MissingQueryParamRejection(paramName) :: _=>
  respondWithMediaType(`application/json`) {
    complete(BadRequest, toJson(Map("status" -> false, "message" -> s"Missing parameter $paramName, request denied")))
  }

by soong at December 17, 2014 07:36 PM

Planet Clojure

Greetings from the Functional Frontier!

or, How We Lost Grandma to Dysentery but Gained Stateless UI

We had a blast at Prismatic HQ last week, with a lineup of talks exploring the leading edge of functional programming on the frontend:

Prismatic's own Logan Linn kicked off the evening explaining how we increase predictability and reduce complexity at Prismatic using single-direction data flows with ClojureScript.

Jordan Garcia of Optimizely followed, with ways to use immutability, one-way data flow, and pure functions on the frontend with JavaScript, showing how they use Flux at Optimizely as a decoupled way to model application and UI state.

Richard Feldman of NoRedInk introduced us to Elm, a functional frontend language for making stateless UIs in the browser. He first tried Elm when building his own personal project, Dreamwriter, and fell down a glorious rabbit hole of immutable data and stateless functions from which he has yet to emerge.

For those who joined us, thanks so much for making the event such a blast. Thanks also to Marco, our esteemed MC for the evening, who made jaws drop with some nerdy programming jokes.

For those who couldn’t make it (or just really want to hear Marco’s jokes again), click through to watch the video of the entire event:

by Prismatic at December 17, 2014 07:22 PM

TheoryOverflow

Karp reduction/many-one reduction [on hold]

Why is Karp reduction also called "many-one reduction"? What do the 'many' and the 'one' stand for? I tried looking at Wikipedia and read some books, but I did not find any explanation. I do understand what Karp reduction is, but I can't understand why it is called "many to one". I guess it is as opposed to Cook reduction. I would be happy if you could explain this to me.

by Student at December 17, 2014 07:13 PM

Fefe

I'm slowly starting to wonder what this pope will ...

I'm slowly starting to wonder what this pope will pull off next. Solving the Middle East conflict? Anyone who can get the USA to open an embassy in Cuba can manage that too.

December 17, 2014 07:00 PM

StackOverflow

Scala Polymorphism

I have the following code:

trait SuperX {
 val v: Int
}

class SubY(val v: Int, var z: SuperX) extends SuperX

class SubZ(val v: Int) extends SuperX

and I don't understand why this is not possible

var test: SuperX = new SubY(1, new SubZ(-1))
println(test.z.v)

If I write it as

var test = new SubY(1, new SubZ(-1))

then I am not able to do

test = test.z

I'm new to Scala, so some things are quite confusing. I know it's possible in Java with an interface instead of a trait.

Thanks for your help.

by Robin64 at December 17, 2014 06:59 PM

/r/netsec

TheoryOverflow

Numerical eigenbasis for a unitary

Do you know what numerical software computes an eigenvector basis for a unitary matrix?

Say I have a unitary matrix $U$. If its eigenvalues are simple (no multiplicities), then for instance Matlab computes an eigenbasis for $U$. However, if some eigenvalues have multiplicities, the software does not find independent eigenvectors in the corresponding eigenspaces. If a matrix is symmetric or Hermitian, Matlab is programmed to output an eigenbasis (even if there are eigenvalues with multiplicities). No such thing for unitary matrices, as far as I know.

I found a way to avoid this: if $\lambda$ is an eigenvalue with multiplicity, then I can form the matrix $B=U-\lambda\cdot\mathbf{1}$ and find the nullity of $B$. The only problem is that doing this for each possible eigenvalue is slow. I wonder if there is a better solution.

If that makes a difference, I can assume that my unitary matrix is real.

Thanks, and I apologize if the question is trivial.

by costelus at December 17, 2014 06:20 PM

StackOverflow

What parts of Java (Standard library) do you need to know to write Scala code?

What parts of Java (Standard library) do you need to know to write Scala code?

I'm teaching myself Scala and I'm really loving the language but I'm a little disturbed that you also have to know some of the Java standard library to write Scala code.

I'm not a Java programmer so I'm curious what 'main' parts of the Java standard library are not represented in the Scala standard library?

by G4143 at December 17, 2014 06:17 PM

/r/scala

QuantOverflow

TAQ NYSE OpenBook

Where can I get/buy the TAQ NYSE OpenBook for specific stocks on specific days? I don't need a whole year of all stocks. I just want to enter a day and a stock, so I can download the order book data of that day.

by Markus Selinger at December 17, 2014 06:10 PM

/r/compsci

DataTau

StackOverflow

SchemaRDD: Too many arguments in method signature

I was required to create a dataset to apply k-means clustering to. The schema consists of approximately 1.7 million rows of 662 "columns". But the data still needs a little cleansing (there are around 200,000 repeated rows); for that I thought I'd just apply a SchemaRDD.distinct(). I know it can simply be done by using RDD.distinct() (which we did) without the need to define:

class tbl_dataset_ced_subcat(val x1, val x2,..., val xn) extends Product with Serializable {...}

But I tried doing so, and when I got to the part of parsing the file into the RDD and registering the table, I got the exception "Too many arguments in method signature" when passing 662 parameters.

sc.textFile("hdfs://" + globals.server + ":9000/output/consolidado_clientes.txt")
  .map(line => globals.pattern.split(line, -1))
  .map(t => new tbl_dataset_ced_subcat(t(0), t(1),...,t(661)))
  .registerTempTable("clientes_subcat")

So, the questions are: is there a maximum number of fields/arguments a SchemaRDD can have? If the number of arguments is above 255 (which I believe is the maximum number of arguments a method can have on the JVM), how can I use all 662? Is it impossible in Spark? Should I be using something like HBase and its column families for processing them?

It's all just academic, but I'd like to understand a little bit more about RDDs, SchemaRDDs and their limitations and workarounds

Edit:

Here's the exception

Exception in thread "Driver" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:162)
Caused by: java.lang.ClassFormatError: Too many arguments in method signature in class file depTransData$tbl_dataset_ced_subcat
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at depTransData$.main(depuracion_dataset_ced_scat.scala:721)
    at depTransData.main(depuracion_dataset_ced_scat.scala)
    ... 5 more
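
For what it's worth, the usual way around the JVM's 255-parameter limit is to build the schema programmatically with Row and StructType instead of a case class. A minimal sketch, assuming a Spark 1.3-style DataFrame API (on the 1.1/1.2 SchemaRDD API the analogous call is sqlContext.applySchema); the column names and types here are made up:

import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Generate the 662 columns instead of declaring them one by one.
val schema = StructType((1 to 662).map(i => StructField(s"x$i", StringType, nullable = true)))

val rows = sc.textFile("hdfs://" + globals.server + ":9000/output/consolidado_clientes.txt")
  .map(line => Row.fromSeq(globals.pattern.split(line, -1).toSeq))

val sqlContext = new SQLContext(sc)
// distinct() drops the repeated rows before the table is registered.
sqlContext.createDataFrame(rows, schema).distinct().registerTempTable("clientes_subcat")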

by edC0der at December 17, 2014 05:59 PM

TheoryOverflow

Lower bound on estimating $\sum_{k=1}^n a_k$ for non-increasing $(a_k)_k$

I'd like to know (related to this other question) if lower bounds are known for the following testing problem: one is given query access to a sequence of non-negative numbers $a_1 \geq \dots \geq a_n$ and $\varepsilon \in (0,1)$, with the promise that either $\sum_{k=1}^n a_k = 1$ or $\sum_{k=1}^n a_k \leq 1-\varepsilon$.

How many queries (lookups) are sufficient and necessary for an (adaptive) randomized algorithm to distinguish between the two cases, with probability at least $2/3$?

I have found a previous post that gives a logarithmic (in $n$) upper bound for the related problem of approximating the sum, and a roughly matching lower bound on that problem for deterministic algorithms; but couldn't find a result for the specific problem I am considering (in particular, randomized algorithms).

by Clement C. at December 17, 2014 05:35 PM

StackOverflow

Connecting to a datomic-free instance hosted on EC2 from outside AWS?

I've installed and run cldwalker's datomic-free recipe https://github.com/cldwalker/datomic-free on an EC2 instance.

;=> System started datomic:free://{EC2 private IP address}:4334/<DB-NAME>, 
;   storing data in: data

My free-transactor.properties file looks like this:

protocol=free
host={EC2 private IP address}
port=4334
h2-port=4335
h2-web-port=4336

I want to connect the database from outside EC2:

(require '[datomic.api :as d]) 

(def uri "datomic:free://{EC2 public IP address}:4334/om_async?h2-port=4335&h2-web-port=4336&aws_secret_key={xxx}&aws_access_key_id={yyy}")

(d/create-database uri)

But I get:

clojure.lang.ExceptionInfo: Error communicating with HOST 
{EC2 private IP address} on PORT 4334 :: {:timestamp 1418304487036, 
:host "{EC2 private IP address}", :version "0.9.5078", :port 4334, 
:username "{XXX}", :peer-version 2, :alt-host nil, :password "{YYY}", 
:encrypt-channel true}

What should I do to make this work?

UPDATE:

I have found the "Free Transactor on EC2" thread in the Datomic Google Group: https://groups.google.com/d/msg/datomic/wBRZNyHm03o/0SdNhqjF27wJ

Does this mean I can only connect to Datomic-free if my app runs on the same server?

Would I have the same problem (not being able to access the db from outside the server) if I had Datomic-free hosted on Linode or DigitalOcean?

Thanks in advance for your help!
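
One thing worth trying (an assumption on my part, based on the alt-host setting that the transactor properties support): keep host as the private address the transactor binds to, advertise the public address via alt-host, and open ports 4334-4336 in the EC2 security group:

protocol=free
host={EC2 private IP address}
alt-host={EC2 public IP address}
port=4334
h2-port=4335
h2-web-port=4336

Peers that cannot reach the host address should then fall back to alt-host.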

by leontalbot at December 17, 2014 05:32 PM

CompsciOverflow

Calculating the number of unique BST generatable from n keys, why is my number so large

I want to find the number of distinct BSTs I can get with 3 unique keys (i.e. 1, 2, 3)

Here's my solution (diagram omitted):

  • In case 1, each node has 3, 2, and 1 possibilities respectively, so 3*2*1 = 6 ways

  • In case 2, we have the same situation: the top node can be 1, 2, or 3 (three choices), the second node has two choices, and so on and so forth, so I get 6 ways

  • In case 3, it is the same as case 2 and I get 6 ways

In the end I have 6 + 6 + 6 (the number of the beast) = 18 different trees.

(Edited!)

Why does this answer from StackOverflow, based on the so-called Catalan number, only give me 5 trees?
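
For reference, the Catalan number counts the distinct BST shapes on $n$ keys:

$$C_n = \frac{1}{n+1}\binom{2n}{n}, \qquad C_3 = \frac{1}{4}\binom{6}{3} = 5.$$

The usual resolution of the discrepancy: once a shape is fixed, the BST ordering property forces exactly one placement of the keys, so each shape admits a single valid labeling rather than $3!$ of them.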

by Math Newb at December 17, 2014 05:31 PM

What is the complexity of recurrence $T(n) = T(n-1) + 1/n$ [duplicate]

This question already has an answer here:

What is the complexity of the following recurrence? $$T(n) = T(n-1) + 1/n$$

I highly suspect the answer to be $O(1)$, because the work reduces by $1$ each time, so by the $n$th step it would be $T(n-n) = T(0)$, and the initial task reduces to 0.
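
For reference, unrolling the recurrence turns it into a harmonic sum:

$$T(n) = T(0) + \sum_{k=1}^{n} \frac{1}{k} = T(0) + H_n = \Theta(\log n),$$

since $H_n = \ln n + O(1)$.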

by Math Newb at December 17, 2014 05:29 PM

Dave Winer

What if the RIAA had embraced Napster?

Back in 2000 when Napster was raging, I kept writing blog posts asking this basic question. Isn't there some way the music industry can make billions of dollars off the new excitement in music?

Turns out there was. Ask all the streaming music services that have been born since the huge war that the music industry had with the Internet. Was it necessary? Would they have done better if they had embraced the inevitable change instead of trying to hold it back? The answer is always, yes, it seems.

Well, now it seems Sony is doing it again, on behalf of the movie industry. Going to war with the Internet. Only now in 2014, the Internet is no longer a novel plaything, it's the underpinning of our civilization, and that includes the entertainment industry. But all they see is the evil side of the net. They don't get the idea that all their customers are now on the net. Yeah there might be a few holdouts here and there, but not many.

What if instead of going to war, they tried to work with the good that's on the Internet? It has shown over and over that it responds. People basically want a way to feel good about themselves. To do good. To make the world better. To not feel powerless. It's perverse, perhaps, to think that Hollywood, which is so averse to change, could try to use this goodwill to make money, but I think they could, if they appealed to our imaginations instead of fear.

December 17, 2014 05:17 PM

Fefe

Brief announcement from the US aviation security agency TSA: The ...

Brief announcement from the US aviation security agency TSA:
The Transportation Security Administration needed an exemption from new Obama administration rules restricting racial profiling by the government so the TSA could target travelers for extra scrutiny based on their nationality and gender, the head of the agency said Tuesday.
Did you catch that? The TSA doesn't discriminate by race at all, but by nationality!1!! That's something completely different! You can't just lump those together!

December 17, 2014 05:00 PM

Hamas must be removed from the EU's list of terrorist ...

Hamas must be removed from the EU's list of terrorist organizations. Reasoning: Hamas's inclusion on the list was based not on an investigation of what Hamas had actually done, but on hearsay from the Internet. So ruled the General Court of the European Union.

December 17, 2014 05:00 PM

High Scalability

The Big Problem is Medium Data

This is a guest post by Matt Hunt, who leads open source projects for Bloomberg LP R&D. 

“Big Data” systems continue to attract substantial funding, attention, and excitement. As with many new technologies, they are neither a panacea, nor even a good fit for many common uses. Yet they also hold great promise. The question is, can systems originally designed to serve hundreds of millions of requests for something like web pages also work for requests that are computationally expensive and have tight tolerances?

Modern era big data technologies are a solution to an economics problem faced by Google and other Internet giants a decade ago. Storing, indexing, and responding to searches against all web pages required tremendous amounts of disk space and computer power. Very powerful machines, fast SAN storage, and data center space were prohibitively expensive. The solution was to pack cheap commodity machines as tightly together as possible with local disks.

This addressed the space and hardware cost problem, but introduced a software challenge. Writing distributed code is hard, and with many machines comes many failures. So a framework was also required to take care of such problems automatically for the system to be viable.

Hadoop

Right now, we're in a transition phase in the computing industry, one that began with the entrance of Hadoop and its community starting in 2004. Understanding why and how these systems were created also offers insight into some of their weaknesses.

At Bloomberg, we don't have a big data problem. What we have is a "medium data" problem -- and so does everyone else. Systems such as Hadoop and Spark are less efficient and mature for these typical low-latency enterprise uses in general. High core counts, SSDs, and large RAM footprints are common today, but many of the commodity platforms have yet to take full advantage of them, and challenges remain. A number of distributed components are further hampered by Java, which creates its own complications for low-latency performance.

A practical use case

by Todd Hoff at December 17, 2014 04:56 PM

StackOverflow

How to query to mongo using spark?

I am using spark and mongo. I am able to connect to mongo using following code:

val sc = new SparkContext("local", "Hello from scala")

val config = new Configuration()
config.set("mongo.input.uri", "mongodb://127.0.0.1:27017/dbName.collectionName")
val mongoRDD = sc.newAPIHadoopRDD(config, classOf[com.mongodb.hadoop.MongoInputFormat], classOf[Object], classOf[BSONObject])

above code gives me all documents from collection.

Now I want to apply some conditions on query.

For that I used

config.set("mongo.input.query","{customerId: 'some mongo id'}")

This took only one condition at a time. I want to add a condition if 'usage' > 30

1) How do I add multiple conditions (including greater-than and less-than) to a mongo query using Spark and mongo?

Also, I want to iterate over each document in the query result using Scala.

2) How do I iterate through the result using Scala?
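
A sketch of what this might look like with the mongo-hadoop connector (the value of mongo.input.query is an ordinary MongoDB query document, so conditions combine inside one JSON object, and $gt/$lt are the standard operators):

config.set("mongo.input.query", "{customerId: 'some mongo id', usage: {$gt: 30}}")

val mongoRDD = sc.newAPIHadoopRDD(config, classOf[com.mongodb.hadoop.MongoInputFormat], classOf[Object], classOf[BSONObject])

// Each element is a (id, document) pair, so iteration is plain Scala:
mongoRDD.collect().foreach { case (id, doc) => println(doc.get("usage")) }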

by Vishwas at December 17, 2014 04:42 PM

StackOverflow

Ansible copy module with {{ item }}

I have a yml file for variables which goes like this.

newHosts:
  - hostIP: 192.168.1.22
    filename: file1
  - hostIP: 192.168.1.23
    filename: file2

I am using add_host: {{ item.hostIP }} with with_items: {{ newHosts }}. I want to copy the respective file to the respective host using something like {{ item.filename }}, but it copies all files to each host. How can I copy only the corresponding file to each node?
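
One sketch of a way to do this (untested; the group name and destination path are made up): register the filename as a host variable when adding the host, then let each new host copy only its own file:

- hosts: localhost
  tasks:
    - add_host: name={{ item.hostIP }} groups=new_nodes filename={{ item.filename }}
      with_items: newHosts

- hosts: new_nodes
  tasks:
    - copy: src={{ filename }} dest=/tmp/{{ filename }}

Since add_host turns extra key=value pairs into host variables, {{ filename }} resolves per host in the second play.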

by Fazal-e-Rehman Khan at December 17, 2014 04:33 PM

Planet Clojure

Clojure/conj in Five Talks

By Jason Lewis and Milt Reder

We were fortunate to be able to attend Clojure/conj this year in Washington, D.C. Below you will find some commentary on our five favorite talks followed by some highlights from the conference.

Read the post, then click on the titles to go directly to the videos.

Zach Oakes - Making Games at Runtime with Clojure

Zach Oakes is an amazing guy. In his talk he describes how he pivoted his life from the lucrative world of cryptography and steganography to teaching computer science through game development. He's produced several open source projects to simplify onboarding, including the Nightcode IDE; Nightmod, a tool for making live-moddable games; and the play-clj library to support easy game development.

He's currently teaching programming at his local library (and not getting paid for it), so you should fund him on Gratipay.

Rich Hickey - Inside Transducers

As usual, Rich's keynote is a little hard to summarize without getting into technical minutiae. In fact, if you haven't seen his Strange Loop talk on transducers, you might want to go back and watch that one first. Now that you have plans for the next two hours or so, we'll try to briefly explain why these are cool.

Functional programming languages generally include powerful higher-order functions for manipulating collections. Especially Lisp. Especially especially Clojure.

The most common ones are functions such as map, which given a function f will map that function over a collection, and reduce, which will apply a function to successive elements of a collection, returning the result of combining them. So for instance, (map inc [1 2 3]) would map the inc(rement) function over the vector and return [2 3 4]. (reduce * [1 2 3]) would reduce the collection with the * (multiply) function, and return 6.

Transducers let us write that (map inc) or (reduce *) or (filter odd?) piece without supplying a collection, and get back transducers: reducing-function transformers that we can compose together.

Okay, that's probably a lot for this post. Watch the videos, it'll make sense eventually.

Lucas Cavalcanti & Edward Wible - Exploring Four Hidden Superpowers of Datomic

Lucas and Edward are building a bank from scratch in Brazil, using Clojure and Datomic. Their company, Nubank, netted 14.3M from Sequoia. In this talk, they explain why Clojure (and particularly Datomic) provide an ideal basis for building financial software, whether that's the temporal component of Datomic or its transactional integrity, or machine learning applications for underwriting and fraud detection.

Bozhidar Batsov - The Evolution of the Emacs Tooling for Clojure

We both love the GNU Emacs text editor, and Bozhidar Batsov (Bug for short) has contributed some amazing tools to the Emacs/Clojure communities, including Prelude, a batteries included set of default configurations to get started with Emacs, and CIDER, which turns Emacs into a powerful IDE with an integrated Clojure REPL.

Bug talks about future developments for CIDER and cider-nrepl. Exciting stuff for the world of Emacs and Clojure.

Brian Goetz - Stewardship: The Sobering Parts

This was the closing keynote of the conference, and it was incredible. If you only watch one of these talks, watch this one.

The JVM has been around since 1995, and in that time has evolved significantly. With over 9 million developers, change happens incrementally, in order to support extant code while making progress possible. Brian talks about changes coming in Java 8 and 9, and some of the design decisions supporting them. Two of them are especially interesting.

First, it's been public knowledge that Java 8 is getting lambda expressions for some time. What we didn't know is that the Java 8 lambdas are actually syntactic sugar over interfaces and anonymous inner classes.

Second (and this is potentially huge) is the removal of what Brian called a "stupid security model" in Java 1.0. Since the beginning, Java's security model has been based on frame counting, to prevent frame injection. What this means for developers (and especially for a Lisp like Clojure) is that functions can't re-use stack frames, so tail call optimization is impossible on the JVM. With this change (coming in Java 9, I believe), we might start to see tail call optimization on the JVM. Not guaranteed, but at least possible.

Around the Conference

Like all good conferences, Clojure/conj threw an awesome party. This year it was at the National Museum of Crime and Punishment, one of the best interactive museums we'd never heard of. So in addition to free drinks and incredibly geeky conversation, we got to crack safes, get our fingerprints taken, and shoot rifles (not real ones).

The Unsessions were also great. Stu Halloway and Tim Ewald did an unconference session on Datomic, the database technology that underpins a lot of our infrastructure. It was great getting to have a live Q&A with the folks who actually built the tech you're running your business on.

All in all, one of the best dev conferences we've ever been to.

If you're interested in Clojure but couldn't make it to the conj, all of the talks are available on the Clojure/conj 2014 YouTube playlist, and don't forget Yet Analytics hosts the Baltimore Clojure Meetup the third Tuesday of each month!

by Yet Analytics at December 17, 2014 04:24 PM

StackOverflow

How to fix the Product Type Inferred error from Scala's WartRemover tool

I'm using WartRemover tool to avoid possible errors in my Scala 2.11 code.

Specifically, I want to know how to fix the "Product Type Inferred" error.

Looking at the repo documentation, I can only see the failure example, but I would like to know how I'm supposed to fix that error:

https://github.com/puffnfresh/wartremover#product.

Doing my homework, I ended up with this other link that explains how to fix type-inference-failure errors: https://blog.cppcabrera.com/posts/scala-wart-remover.html. To quote: "If you see any of the warnings below, the fix is usually as simple as providing type annotations", but I don't understand what that means. I really need a concrete example.
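
To make it concrete, here is a hypothetical example of how the warning arises and what "providing type annotations" means:

// The least upper bound of (Int, Int) and (Int, Int, Int) is inferred as
// Product (with Serializable), which is almost never intended, so the
// Product wart flags it.
val p = if (scala.util.Random.nextBoolean()) (1, 2) else (1, 2, 3)

// Annotating forces a deliberate type instead of the inferred LUB:
val q: Either[(Int, Int), (Int, Int, Int)] =
  if (scala.util.Random.nextBoolean()) Left((1, 2)) else Right((1, 2, 3))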

by Lynx CR at December 17, 2014 04:11 PM

Daniel Lemire

Optimizing polymorphic code in Java

Oracle’s Java is a fast language… sometimes just as fast as C++. In Java, we commonly use polymorphism through interfaces, inheritance or wrapper classes to make our software more flexible. Unfortunately, when polymorphism is involved with lots of function calls, Java’s performance can go bad. Part of the problem is that Java is shy about fully inlining code, even when it would be entirely safe to do so.

Consider the case where we want to abstract out integer arrays with an interface:

public interface Array {
    public int get(int i);
    public void set(int i, int x);
    public int size();
}

Why would you want to do that? Maybe because your data can be in a database, on a network, on disk or in some other data structure. You want to write your code once, and not have to worry about how the array is implemented.

It is not difficult to produce a class that is effectively equivalent to a standard Java array, except that it implements this interface:

public final class NaiveArray implements Array {
    protected int[] array;
    
    public NaiveArray(int cap) {
        array = new int[cap];
    }
    
    public int get(int i) {
        return array[i];
    }
    
    public void set(int i, int x) {
        array[i] = x;  
    }
    
    public int size() {
        return array.length;
    }
}

At least in theory, this NaiveArray class should not cause any performance problem. The class is final, all methods are short.

Unfortunately, on a simple benchmark, you should expect NaiveArray to be over 5 times slower than a standard array when used as an Array instance, as in this example:

public int compute() {
   for(int k = 0; k < array.size(); ++k) 
      array.set(k,k);
   int sum = 0;
   for(int k = 0; k < array.size(); ++k) 
      sum += array.get(k);
   return sum;
}

You can alleviate the problem somewhat by using NaiveArray as an instance of NaiveArray (avoiding polymorphism). Unfortunately, the result is still going to be more than 3 times slower, and you just lost the benefit of polymorphism.

So how do you force Java to inline function calls?

A viable workaround is to inline the functions by hand. You can use the keyword instanceof to provide optimized implementations, falling back on a (slower) generic implementation otherwise. For example, if you use the following code, NaiveArray does become just as fast as a standard array:

public int compute() {
     if(array instanceof NaiveArray) {
        int[] back = ((NaiveArray) array).array;
        for(int k = 0; k < back.length; ++k) 
           back[k] = k;
        int sum = 0;
        for(int k = 0; k < back.length; ++k) 
           sum += back[k];
        return sum;
     }
     //...
}

Of course, I also introduce a maintenance problem as the same algorithm needs to be implemented more than once… but when performance matters, this is an acceptable alternative.

As usual, my benchmarking code is available online.

To summarize:

  • Java fails to fully inline frequent function calls even when it could and should. This can become a serious performance problem.
  • Declaring classes as final does not seem to alleviate the problem.
  • A viable workaround for expensive functions is to optimize the polymorphic code by hand, inlining the function calls yourself. Using the instanceof keyword, you can write code for specific classes and, thus, preserve the flexibility of polymorphism.

by Daniel Lemire at December 17, 2014 04:10 PM

TheoryOverflow

best known space lower bound for SAT?

Following on from a previous question,

what are the best current space lower bounds for SAT?

With a space lower bound I here mean the number of worktape cells used by a Turing machine which uses a binary worktape alphabet. A constant additive term is unavoidable since a TM can use internal states to simulate any fixed number of worktape cells. However, I am interested in controlling the multiplicative constant that is often left implicit: the usual setup allows arbitrary constant compression via larger alphabets so the multiplicative constant is not relevant there, but with a fixed alphabet it should be possible to take it into account.

For instance, SAT requires more than $\log\log n + c$ space; if not then this space upper bound would lead to a time upper bound of $n^{1+o(1)}$ by simulation, and thereby the combined $n^{1.801+o(1)}$ space-time lower bound for SAT would be violated (see the linked question). It also seems possible to improve this argument to argue that SAT requires at least $\delta\log n + c$ space for some small positive $\delta$ that is something like $0.801/C$, where $C$ is the constant exponent in simulation of a space-bounded TM by a time-bounded TM.

Unfortunately $C$ is usually quite large (and certainly at least 2 in the usual simulation, where the tapes of a TM are first encoded on a single tape via a larger alphabet). Such bounds with $\delta \ll 1$ are rather weak, and I would be especially interested in a space lower bound of $\log n + c$. An unconditional time lower bound of $\Omega(n^d)$ steps, for some large enough constant $d > 1$, would imply such a space lower bound via simulation. However, time lower bounds of $\Omega(n^d)$ for $d>1$ are not currently known, let alone for large $d$.

Put differently, I'm looking for something that would be a consequence of superlinear time lower bounds for SAT, but which might be possible to obtain more directly.

by András Salamon at December 17, 2014 04:06 PM

"Snake" reconfiguration problem

While writing a small post on the complexity of the videogames Nibbler and Snake; I found that they both can be modeled as reconfiguration problems on planar graphs; and it seems unlikely that such problems have not been well studied in the motion planning area (imagine for example a chain of linked carriages or robots). The games are well known, however this is a short description of the related reconfiguration model:

SNAKE PROBLEM

Input: given a planar graph $G = (V,E)$, $l$ pebbles $p_1,...,p_l$ are placed on nodes $u_1,...,u_l$ that form a simple path. The pebbles represent the snake, and the first one $p_1$ is his head. The head can be moved from its current position to an adjacent free node, and the body follows it. Some nodes are marked with a dot; when the head reaches a node with a dot, the body will increase by $e$ pebbles in the following $e$ moves of the head. The dot on the node is deleted after the traversal of the snake.

Problem: We ask if the snake can be moved along the graph and reach a target configuration $T$ where the target configuration is the full description of the snake position, i.e. the position of the pebbles.

It is easy to prove that the SNAKE problem is NP-hard on planar graphs of max degree 3 even if no dots are used and also on SOLID grid graphs if we can use an arbitrary number of dots. Things get complicated on solid grid graphs without dots (it is related to another open problem).

I would like to know if the problem has been studied under another name;
and, in particular, if there is a proof that it is in NP; i.e. whether, given a full initial configuration and a full final target configuration, a solution that leads the snake to the target configuration has polynomial length (note that the dots are irrelevant: if the problem is in NP when dots are not allowed, then it remains in NP when dots are allowed).

(consider for example if the target configuration partially overlaps with the initial configuration of the snake)

Edit: a simple example (figure omitted; pebbles are shown in green, and the snake's head is P1).

by Marzio De Biasi at December 17, 2014 04:03 PM

Fefe

Old and busted: TTIP. New hotness: TiSA. The secretly negotiated ...

Old and busted: TTIP.

New hotness: TiSA.

The secretly negotiated TiSA trade agreement (short for "Agreement on Trade in Services") endangers the protection of personal data transferred between states. This is proven by a leaked negotiation draft, which we [Netzpolitik.org] are publishing exclusively in journalistic partnership with the Associated Whistleblowing Press and its local Spanish platform filtrala.org.

December 17, 2014 04:00 PM

Brief announcement from the NSA Inspector General: "If you ...

Brief announcement from the NSA Inspector General:
"If you are the Chancellor of Germany, you don't have a private cell phone"

December 17, 2014 04:00 PM

QuantOverflow

what is a typical way forex brokerages can provide cheap leverage for their customers?

I'm not very well read in the area of high finance but I'm curious how forex brokerages are able to provide the backing for leverage that they can provide to customers.

Is it possible to do this without charging interest, only making the return on the spread against the rates they can get?

Are there standard algorithms that can be used to this end?

by barrymac at December 17, 2014 03:52 PM

StackOverflow

Converting compressed data in array of bytes to string

Suppose I have an Array[Byte] called cmp. val cmp = Array[Byte](120, -100). Now, new String(cmp) gives x�, and (new String(cmp)).getBytes gives Array(120, -17, -65, -67), which isn't equal to the original Array[Byte](120, -100). The byte -100 was part of an Array[Byte] obtained by compressing a string using Zlib.

Note: These operations were done in Scala's repl.
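
For reference, a sketch of what happens: the byte -100 (0x9C) is not valid UTF-8, so decoding replaces it with U+FFFD, whose UTF-8 encoding is exactly the (-17, -65, -67) seen above. A byte-preserving round trip needs a 1:1 encoding such as ISO-8859-1, or Base64 (java.util.Base64 assumes Java 8):

val cmp = Array[Byte](120, -100)
val s = new String(cmp, "ISO-8859-1")              // maps bytes to chars 1:1
assert(s.getBytes("ISO-8859-1").sameElements(cmp)) // round-trips

val b64 = java.util.Base64.getEncoder.encodeToString(cmp) // safe for transport
assert(java.util.Base64.getDecoder.decode(b64).sameElements(cmp))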

by Kamal Banga at December 17, 2014 03:46 PM

/r/emacs

Emacs, org-mode, blogspot?

Are there any good tools for publishing to blogspot blogs? I can find things for Wordpress, but I haven't come across anything for Blogspot. (If I were starting from scratch, I would probably just use Wordpress, but I already have fairly well established blogspot blogs.)

submitted by emacsomancer

December 17, 2014 03:46 PM

StackOverflow

Overcoming type erasure in Scala when pattern matching on Objects which may be different Sets or any type of Object

Is there any way of pattern matching objects where the objects may be Set[Foo] or Set[Bar] when the matching object can be any Object.

Given the below code, trying to pattern match on Set[Bar] will result in a match of Set[Foo] because of type erasure.

import play.api.libs.json._

import scala.collection.immutable.HashMap

case class Foo(valOne: Int, valTwo: Double)

object Foo {
  implicit val writesFoo = Json.writes[Foo]
}

case class Bar(valOne: String)

object Bar {
  implicit val writesBar = Json.writes[Bar]
}

case class TestRequest(params: Map[String, Object])

object TestRequest {

  import play.api.libs.json.Json.JsValueWrapper

  implicit val writeAnyMapFormat = new Writes[Map[String, Object]] {

  def writes(map: Map[String, Object]): JsValue = {
    Json.obj(map.map {
    case (s, a) => {
      val ret: (String, JsValueWrapper) = a match {
        case _: String => s -> JsString(a.asInstanceOf[String])
        case _: java.util.Date => s -> JsString(a.toString)
        case _: Integer => s -> JsString(a.toString)
        case _: java.lang.Double => s -> JsString(a.toString)
        case None => s -> JsNull
        case foo: Set[Foo] => s -> Json.toJson(a.asInstanceOf[Set[Foo]])
        case bar: Set[Bar] => s -> Json.toJson(a.asInstanceOf[Set[Bar]])
        case str: Set[String] => s -> Json.toJson(a.asInstanceOf[Set[String]])
      }
      ret
    }}.toSeq: _*)
    }
  }

  implicit val writesTestRequest = Json.writes[TestRequest]
}

object MakeTestRequest extends App {
  val params = HashMap[String, Object]("name" -> "NAME", "fooSet" -> Set(Foo(1, 2.0)), "barSet" -> Set(Bar("val1")))

  val testRequest = new TestRequest(params)

  println(Json.toJson(testRequest))

}

Trying to serialise the TestRequest will result in:

Exception in thread "main" java.lang.ClassCastException: Bar cannot be cast to Foo

Delegating the pattern matching of Sets to another method in an attempt to get the TypeTag,

        case _ => s -> matchSet(a)

results in the type, unsurprisingly, of Object.

def matchSet[A: TypeTag](set: A): JsValue = typeOf[A] match {
    case fooSet: Set[Foo] if typeOf[A] =:= typeOf[Foo] => Json.toJson(set.asInstanceOf[Set[Foo]])
    case barSet: Set[Bar] if typeOf[A] =:= typeOf[Bar] => Json.toJson(set.asInstanceOf[Set[Bar]])
}

The runtime error being:

Exception in thread "main" scala.MatchError: java.lang.Object (of class scala.reflect.internal.Types$ClassNoArgsTypeRef)

A workaround could be to check the instance of the first element in the Set but this seems inefficient and ugly. Could also match on the key eg fooSet or barSet but if the keys are the same name eg both called set, then this wouldn't work.

In 2.11, is there any way to get at the type/class the Set has been created with?
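
For completeness, the element-sampling workaround mentioned above might look like this inside writes, replacing the three Set cases (ugly as noted, and the empty set needs an arbitrary decision):

case set: Set[_] => set.headOption match {
  case Some(_: Foo)    => s -> Json.toJson(set.asInstanceOf[Set[Foo]])
  case Some(_: Bar)    => s -> Json.toJson(set.asInstanceOf[Set[Bar]])
  case Some(_: String) => s -> Json.toJson(set.asInstanceOf[Set[String]])
  case None            => s -> Json.arr() // element type is unknowable here
}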

by the swan at December 17, 2014 03:34 PM

Does SBT use the Fast Scala Compiler (fsc)?

Does SBT make use of fsc?

For test purposes I am compiling a 500-line program on a fairly slow Ubuntu machine (Atom N270). Three successive compile times were 77s, 66s, and 66s.

I then compiled the file with fsc from the command line. Now my times were 80s, 25s, 18s. Better! That implies to me sbt is not using fsc. Am I right? If so, why doesn't it use it?

I may try getting sbt to explicitly use fsc to compile, though I am not sure I will figure out the config. Has anyone done this?

by Crosbie at December 17, 2014 03:33 PM

Passing Any as a function parameter

This code :

  val l1: List[String] = List("test")             //> l1  : List[String] = List(test)
  val l2: String = "test"                         //> l2  : String = test

  def printVal(s: Any) = {
    println(s)
  }                                               //> printVal: (s: Any)Unit

  printVal(l1)                                    //> List(test)
  printVal(l2)                                    //> test

compiles and run's as expected.

If I attempt something similar like :

  val arr: Array[((String, String), Double)] = Array((("1", "2"), 4.5))
                                                  //> arr  : Array[((String, String), Double)] = Array(((1,2),4.5))
  def printCol(arr: Array[Any]) = {
    arr.foreach { case (e, i) => println(e + "," + i) }
  }                                               //> printCol: (arr: Array[Any])Unit

  printCol(arr)

Then I receive compile time error :

type mismatch; found: Array[((String, String), Double)] required: Array[Any] Note: ((String, String), Double) <: Any, but class Array is invariant in type T. You may wish to investigate a wildcard type such as `_ <: Any`.

That is, Array[((String, String), Double)] is not a subtype of Array[Any], because Array is invariant.

Can the function printCol be rewritten so that it accepts an array of any tuple type as its parameter and prints the collection values?

Something like this?

  def printCol(arr: Array[((Any, Any) , (Any))]) = {
        arr.foreach { case (e, i) => println(e + "," + i) }
      }
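
One sketch that does compile: since Array is invariant, let the function take type parameters instead of demanding Array[Any]:

def printCol[A, B](arr: Array[(A, B)]): Unit =
  arr.foreach { case (e, i) => println(e + "," + i) }

printCol(arr) // prints (1,2),4.5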

by blue-sky at December 17, 2014 03:15 PM

How to use JUnit's @Rule annotation with Scala Specs2 tests?

In our project we use Scala Specs2 together with Selenium. I'm trying to implement a screenshot-on-failure mechanism "in a classic way (link)" for my tests, using JUnit annotations, but the rule isn't called on test failure at all.

The structure of the test is as follows:

class Tests extends SpecificationWithJUnit {

  trait Context extends LotsOfStuff {
    @Rule
    val screenshotOnFailRule = new ScreenshotOnFailRule(driver)
  }

  "test to verify stuff that will fail" should {
    "this test FAILS" in new Context {
      ...
    }
  }
}

The ScreenshotOnFailRule looks like this:

class ScreenshotOnFailRule (webDriver: WebDriver) extends TestWatcher {

  override def failed(er:Throwable, des:Description) {
    val scrFile = webDriver.asInstanceOf[TakesScreenshot].getScreenshotAs(OutputType.FILE)
    FileUtils.copyFile(scrFile, new File(s"/tmp/automation_screenshot${Platform.currentTime}.png"))
  }
}

I understand that it probably doesn't work because the tests aren't annotated with the @Test annotation. Is it possible to annotate Specs2 tests with JUnit's @Rule annotation?
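
As an alternative sketch (assuming specs2 2.x; takeScreenshot is a hypothetical helper standing in for the rule's logic), the same effect can be had without JUnit by making the context an Around that reacts to a failed result:

import org.specs2.execute.{AsResult, Result}
import org.specs2.specification.Around

trait Context extends Around with LotsOfStuff {
  def around[T: AsResult](t: => T): Result = {
    val result = AsResult(t)
    if (!result.isSuccess) takeScreenshot() // do what ScreenshotOnFailRule.failed does
    result
  }
}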

by Stas S at December 17, 2014 03:13 PM

TheoryOverflow

How many maximization algorithms can we run at the same time on a simple (or super) computer? [on hold]

I have a maximization problem which consists of finding the max of $2^L$ elements. This can be done in $O(2^L)$.

This problem can be decomposed into $L$ maximization problems, where solving problem $k$ takes $O(c_k)$ ($k=1,...,L$ and $c_k$ is the computational complexity of problem $k$).

I am thinking about running the $L$ algorithms in parallel, so that I can get the maxima of all $L$ problems in $O(\max_k c_k)$. Then it remains to compare these $L$ maxima (in $O(L)$) to get the final answer.

My question is: what is the maximum value of $L$ in practice? In other words, how many maximization algorithms can we run at the same time on a computer?

by mat at December 17, 2014 03:09 PM

StackOverflow

Other Scala learning tools simliar to Koans

I'm new to Scala and I've started going through scala koans:

http://www.scalakoans.org/

and I've also found

http://aperiodic.net/phil/scala/s-99/

I'm wondering if there are other exercises that would help me learn this language.

A little background from me - I currently use python for data science and web development tasks. I'm going to be using Scala for the same types of work.

Thank you in advance

Jason

by user2386854 at December 17, 2014 03:01 PM

CompsciOverflow

What problems of procedural programming does OOP solve in practice?

I have studied the book "C++ Demystified". Now I have started to read "Object-Oriented Programming in Turbo C++ first edition (1st edition)" by Robert Lafore. I do not have any knowledge of programming which is beyond these books. This book might be outdated because it's 20 years old. I do have the latest edition, I am using the old because I like it, mainly I am just studying the basic concepts of OOP used in C++ through the first edition of Lafore's book.

Lafore's book emphasizes that OOP is only useful for larger and more complex programs. Every OOP book (including Lafore's) says that the procedural paradigm is prone to errors, e.g. that global data is easily vulnerable to the functions. It is said that a programmer can make honest errors in procedural languages, e.g. by writing a function that accidentally corrupts the data.

Honestly speaking, I am posting my question because I am not grasping the explanation given in this book: Object-Oriented Programming in C++ (4th Edition). I am not grasping these statements from Lafore's book:

Object-oriented programming was developed because limitations were discovered in earlier approaches to programming.... As programs grow ever larger and more complex, even the structured programming approach begins to show signs of strain... ....Analyzing the reasons for these failures reveals that there are weaknesses in the procedural paradigm itself. No matter how well the structured programming approach is implemented, large programs become excessively complex.... ...There are two related problems. First, functions have unrestricted access to global data. Second, unrelated functions and data, the basis of the procedural paradigm, provide a poor model of the real world...

I have studied the book "dysmystified C++" by Jeff Kent, I like this book very much, in this book mostly procedural programming is explained. I do not understand why procedural(structured) programming is weak!

Lafore's book explains the concept very nicely with some good examples. Also I have grasped an intuition by reading Lafore's book that OOP is better than procedural programming but I am curious to know how exactly in practice procedural programming is weaker than OOP.

I want to see for myself what practical problems one would face in procedural programming and how OOP makes programming easier. I think I would get my answer just by reading Lafore's book contemplatively, but I want to see with my own eyes the problems in procedural code; I want to see how the OOP-style version of a program removes the errors that would occur if the same program were written in the procedural paradigm.

There are many features of OOP, and I understand it is not possible for someone to explain how every one of these features removes the errors that procedural-style code would produce.

So, here is my question:

Which limitations of procedural programming does OOP address and how does it effectively remove these limitations in practice?

In particular, are there examples for programs which are hard to design using the procedural paradigm but are easily designed using OOP?

P.S: Cross posted from: http://stackoverflow.com/q/22510004/3429430

by user31782 at December 17, 2014 02:53 PM

StackOverflow

Apache Spark - Dealing with Sliding Windows on Temporal RDDs

I've been working quite a lot with Apache Spark over the last few months, but now I have received a pretty difficult task: computing average/minimum/maximum etc. over a sliding window on a paired RDD, where the key component is a date tag and the value component is a matrix. So each aggregation function should also return a matrix, where each cell holds the average of that cell over the time period.

I want to be able to say that I want the average for every 7 days, with a sliding window of one day. The window always moves by one unit, where the unit matches the one the window size is expressed in (so for a 12-week window, the movement unit is one week).

My initial thought is to simply iterate X times (if we want an average per X days) and each time just group the elements by their date, with an offset.

So if we have this scenario:

Days: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Matrices: A B C D E F G H I J K L M N O

And we want the average per 5 days, I will iterate 5 times and show the grouping here:

First iteration:

Group 1: (1, A) (2, B) (3, C) (4, D) (5, E)

Group 2: (6, F) (7, G) (8, H) (9, I) (10, J)

Group 3: (11, K) (12, L) (13, M) (14, N) (15, O)

Second iteration:

Group 1: (2, B) (3, C) (4, D) (5, E) (6, F)

Group 2: (7, G) (8, H) (9, I) (10, J), (11, K)

Group 3: (12, L) (13, M) (14, N) (15, O)

Etcetera, and for each group, I have to do a fold/reduce procedure to get the average.

However, as you might imagine, this is pretty slow and probably a rather bad way to do it. I can't really figure out a better way, though.
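
One sketch that avoids grouping X times (here rdd stands for the keyed RDD of (day, matrix) pairs, days are assumed to be integer indices, and addMatrices/divideMatrix are hypothetical element-wise helpers): emit each pair once per window that contains it, then reduce per window:

val w = 7 // window length in days
val perWindow = rdd.flatMap { case (day, m) =>
  (0 until w).map(offset => (day - offset, m)) // key each copy by its window's start day
}
val averages = perWindow
  .reduceByKey(addMatrices)               // element-wise sum within each window
  .mapValues(sum => divideMatrix(sum, w)) // element-wise average (edge windows hold
                                          // fewer than w days; divide by the actual
                                          // count if that matters)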

by Johan S at December 17, 2014 02:53 PM

StackOverflow

How to reuse scala slick "withSession { implicit session: Session =>" code block

I have this code in scala using slick

def insertTask(task: Task) = {
  conn.dbObject withSession { implicit session: Session =>
    tasks.insert(task)
  }
}

it looks working :)

Now I'm also going to have code for readTask, and I do not want to duplicate the code for withSession { implicit...

So I thought of doing this:

def doWithConn(dbConn: DBConnection, doThisCodeBlock: => Unit)(implicit session: Session) = {
  dbConn.dbObject withSession { implicit session: Session =>
    doThisCodeBlock
  }
}

and now my code looks like

def insertTask(task: Task) = {
  doWithConn(conn, tasks.insert(task)) // here I get the compilation error below
}

however I get the following compilation error:

Error:(36, 34) could not find implicit value for parameter session: scala.slick.jdbc.JdbcBackend#SessionDef doWithConn(conn, tasks.insert(task)) ^

I'm not sure how to pass the session from the insertTask method. How can I pass it and fix this compilation error?

thanks
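
One sketch that sidesteps the error: take the block as a function of Session, so the session created by withSession is threaded through instead of being demanded at the call site:

def doWithConn[T](dbConn: DBConnection)(block: Session => T): T =
  dbConn.dbObject withSession { session: Session =>
    block(session)
  }

def insertTask(task: Task) =
  doWithConn(conn) { implicit session =>
    tasks.insert(task)
  }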

by Jas at December 17, 2014 02:44 PM

Depicting 'has-a and belongs-to' many-to-many relationships in Clojure [on hold]

I'm wondering how I can define and use a many-to-many relationship in Clojure. For example, a record in Table-A references some of the values in Table-B (these values are static). There are many other rows which should reference the same values in Table-B.

I use Korma for DB related ops.

by Coding active at December 17, 2014 02:38 PM

Use `@annotation.varargs` on constructors

I want to declare a class like this:

class StringSetCreate(val s: String*) {
 // ...
}

and call that in Java. The problem is that the constructor is of type

public StringSetCreate(scala.collection.Seq)

So in java, you need to fiddle around with the scala sequences which is ugly.

I know that there is the @annotation.varargs annotation which, if used on a method, generates a second method which takes the java varargs.

This annotation does not work on constructors, at least I don't know where to put it. I found a Scala Issue SI-8383 which reports this problem. As far as I understand there is no solution currently. Is this right? Are there any workarounds? Can I somehow define that second constructor by hand?
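
One workaround sketch: keep the varargs constructor for Scala callers and expose a Java-friendly factory on the companion object, where @annotation.varargs does apply:

class StringSetCreate(val s: String*)

object StringSetCreate {
  @annotation.varargs
  def create(s: String*): StringSetCreate = new StringSetCreate(s: _*)
}

// Java side: StringSetCreate.create("a", "b");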

by theomega at December 17, 2014 02:26 PM

Possible to add a trait dynamically given a by name parameter [duplicate]

This question already has an answer here:

Say I have the following classes and trait:

abstract class Base {
  def foo
}

class MyClass extends Base {  
  def foo = { println("foo") }
}

trait MyTrait extends Base {
  abstract override def foo {
   println("overriding foo")
   super.foo
  } 
}

And under a certain runtime condition I want to be able to mix in MyTrait when creating an instance of the class. I have a function that takes a by-name parameter to create the class, as below:

def someFunc(create: => Base): Base = {

  if (someCondition)
   // How to add the MyTrait? (ie would be doing a new MyClass with MyTrait)
  else
   create
}
...
someFunc(new MyClass)

...
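
Since traits are mixed in at compile time, one sketch is to decide between two by-name factories at the call site:

def someFunc(create: => Base, createDecorated: => Base): Base =
  if (someCondition) createDecorated else create

someFunc(new MyClass, new MyClass with MyTrait)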

by user79074 at December 17, 2014 02:25 PM

TheoryOverflow

Can typed lambda calculi express *all* algorithms below a given complexity?

I know that the complexity of most varieties of typed lambda calculi without the Y combinator primitive is bounded, i.e. only functions of bounded complexity can be expressed, with the bound becoming larger as the expressiveness of the type system grows. I recall that, e.g., the Calculus of Constructions can express at most doubly exponential complexity.

My question concerns whether the typed lambda calculi can express all algorithms below a certain complexity bound, or only some? E.g. are there any exponential-time algorithms not expressible by any formalism in the Lambda Cube? What is the "shape" of the complexity space which is completely covered by different vertices of the Cube?

by jkff at December 17, 2014 02:21 PM

Maximum weight "fair" matching

I'm interested in a variant of the maximum weight matching in a graph, which I call "Maximum Fair Matching".

Assume that the graph is full (i.e. $E=V\times V$), has an even number of vertices, and that the weight is given by a profit function $p:{V\choose 2}\to \mathbb N$. Given a matching $M$, denote by $M(v)$ the profit of the edge $v$ is matched with.

A matching $M$ is a fair matching iff, for any two vertices $u,v\in V$: $$(\forall w\in V:\ \ p(\{w,v\})\geq p(\{w,u\}))\to M(v)\geq M(u)$$

That is, if for every vertex $w\in V$, matching $w$ with $v$ yields at least as much profit as matching it with $u$, then a fair matching must satisfy $M(v)\geq M(u)$.

Can we find a maximum weight fair matching efficiently?


An interesting case is when the graph is bipartite and the fairness applies to only one side; that is, assume that $G=(L\cup R,L\times R)$, and we are given a profit function $p:L\times R\to \mathbb N$.

A Fair Bipartite Matching is a matching in $G$ such that for any two vertices $u,v\in L$: $$(\forall w\in R:\ \ p(\{v,w\})\geq p(\{u,w\}))\to M(v)\geq M(u)$$

How fast can we find a maximum weight fair bipartite matching?


The motivation for this problem comes from the bipartite special case. Assume you have $n$ workers and $m$ tasks, and worker $i$ can produce $p_{i,j}$ profit from job $j$. The problem here is to design a reasonable assignment (in the sense that workers will not feel "ripped off") while maximizing the total payoff. (There is a tradeoff here between the power of the assignment mechanism and the social benefit.)

Define the social welfare (or the factory profit) of an assignment of workers to jobs as the sum of profits.

Looking at different scenarios for the power of the job assigner, we get the following results:

  • If we are allowed to assign any worker to any job, we can optimize the factory efficiently (just find a maximal-weight matching).

  • If every worker chooses a task on his own, assuming that his work will be selected (only a single worker can be selected for each job) should he be the most qualified worker that chose the task, workers will converge to the ''greedy'' equilibrium. The reason is that the worker who could earn the most ($i=\mbox{argmax}_i \max_j p_{i,j}$) will choose the most profitable job, and so on. By the approximation ratio of the greedy algorithm for matching, this should give a 2-approximation of the maximal social welfare possible.

I'm looking for something in-between. Let's assume we could assign workers to jobs, but have to promise them that no "less-qualified" worker earns more than them.

How can we find a maximal weight matching promising "fairness" to employees efficiently?

by R B at December 17, 2014 02:08 PM

StackOverflow

Inorder Traversal with return value

So I'm doing inorder traversal for a tree, for which the code goes something like this

var traversal: String = ""

def inorder(node: Node): String = {
  if (node == null)
    return traversal
  inorder(node.leftChild)
  traversal += node.label
  inorder(node.rightChild)
  return traversal
}

I'm facing a (really stupid) issue: when I run it for two nodes (say A and B), the value of traversal accumulated while running for A is also included when getting the traversal for B. Since it is a recursive function, I cannot define traversal inside the function either. Please tell me how to fix this.
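
A sketch of the usual fix: return the string from the recursion instead of accumulating into a shared var, so every call starts from a clean slate:

def inorder(node: Node): String =
  if (node == null) ""
  else inorder(node.leftChild) + node.label + inorder(node.rightChild)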

by Pravesh Jain at December 17, 2014 02:08 PM

SecureSocial InvocationTargetException

I'm trying to get the SecureSocial scala/demo (3.0-M1-play-2.2.x) integrated in my own Play app.

I copied the Scala files from the demo and added a securesocial.conf and extended my routes file to link to the SecureSocial routes.

Compilation is fine, but when I try to run I get the following exception:

[error] application - 

! Internal server error, for (GET) [/] ->

java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_25]
at     sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_25]
at     sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_25]
at java.lang.reflect.Constructor.newInstance(Constructor.java:408) ~[na:1.8.0_25]
at Global$$anonfun$2.apply(Global.scala:35) ~[na:na]
Caused by: java.lang.NoSuchMethodError: play.api.mvc.Results$Status.apply(Ljava/lang/Object;Lplay/api/http/Writeable;)Lplay/api/mvc/SimpleResult;
at securesocial.core.SecureSocial$class.$init$(SecureSocial.scala:46) ~[securesocial_2.10-3.0-M1-play-2.2.x.jar:3.0-M1-play-2.2.x]
at controllers.Application.<init>(Application.scala:23) ~[na:na]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_25]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_25]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_25]

Global.scala:35 is the getControllerInstance method from the example; the line is: _.asInstanceOf[Constructor[A]].newInstance(MyRuntimeEnvironment)

So it looks like it cannot instantiate the Application class. The header of the Application class looks like this: class Application(override implicit val env: RuntimeEnvironment[DemoUser]) extends securesocial.core.SecureSocial[DemoUser] {

The example works but my own project doesn't. I can't figure out the problem and would be glad if someone could help me.

by user4007301 at December 17, 2014 02:05 PM

/r/compsci

Need to write a research proposal for an assignment. Tips? Ideas?

As the title says, I have to write a research proposal for an assignment that is due tomorrow (procrastination, thy name is Satan) and I have no idea what to write about. If anyone has any tips or ideas I would be super grateful.

submitted by errevs

December 17, 2014 02:00 PM

StackOverflow

Spark Streaming Iterative Algorithm

I want to create a Spark Streaming application coded in Scala. I want my application to:

  • read from an HDFS text file line by line
  • analyze every line as a String, modify it if needed, and:
  • keep state that is needed for the analysis in some kind of data structures (hashes, probably)
  • output everything to text files (of any kind)

I've had no problems with the first step:

val lines = ssc.textFileStream("hdfs://localhost:9000/path/")

My analysis consists of searching the hashes for a match on some fields of the analyzed string; that's why I need to maintain state and do the processing iteratively. The data in those hashes is also extracted from the analyzed strings.

What can I do for the next steps?
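
A sketch of the stateful part, assuming the analysis can be keyed somehow (extractKey and the Set-valued state are placeholders): Spark Streaming's updateStateByKey carries state across batches, which can play the role of the hashes; note it requires ssc.checkpoint(...) to be set:

val keyed = lines.map(line => (extractKey(line), line)) // extractKey is hypothetical

val state = keyed.updateStateByKey[Set[String]] { (batch: Seq[String], old: Option[Set[String]]) =>
  Some(old.getOrElse(Set.empty[String]) ++ batch) // merge new lines into the kept state
}

state.saveAsTextFiles("hdfs://localhost:9000/path/out") // one directory per batch interval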

by GPrivi at December 17, 2014 01:48 PM

What's a workable way setup an Akka cluster in a multi-node Docker environment?

Assume the picture below. Each Docker container belongs to a single Akka cluster "foo", and each container runs one cluster node. The IP address assigned by Docker (inside the container) is given in green. All the internal ports are 9090 but are mapped to various external ports on the host.

(diagram omitted)

What is the Akka URI for the node in, say, Docker 5? Would it be akka.tcp://foo@10.0.0.195:9101?

I've read some blogs on Akka and Docker that involve linking, but this doesn't seem workable (?) for a multi-node deployment, and I'm not sure how linking scales to hundreds of nodes.

I need some way for Akka to know the address of its cluster. Left to its own devices, Docker 5 might decide it's reachable at akka.tcp://foo@192.178.1.2:9090, which is useless/unreachable outside of its own container.

At this point I'm thinking I pass the host's IP and port (e.g. 10.0.0.195:9101) to the Docker container as a parameter on start-up for Akka to use when it configures itself.

Would this work, or is there a better way to go?
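
For what it's worth, that approach can be wired straight into the remoting config; a sketch, assuming the container is started with -DHOST_IP=10.0.0.195 -DHOST_PORT=9101 passed in by whoever launches it:

akka.remote.netty.tcp {
  hostname = ${HOST_IP}  # the address other cluster nodes should dial
  port = ${HOST_PORT}
}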

by Greg at December 17, 2014 01:45 PM

Planet Theory

PC chairs for ICALP 2016

I am happy to inform you that the PC chairs for ICALP 2016 will be
Many thanks to these colleagues for their willingness to serve as PC chairs for the conference, which will be held in Rome.


by Luca Aceto (noreply@blogger.com) at December 17, 2014 01:31 PM

StackOverflow

Controller action returns "Invalid Json" when using a Fakerequest from a spec2 test

I am using playframework 2.6 and play-slick 0.8.0.

Action code:

def addCompany = Authenticated {
 DBAction(parse.json) {
   implicit rs => {
     val newCompany = rs.request.body
     val result = CompanyTable.insert(newCompany.as[Company])(rs.dbSession)

     if(result > 0)
       Ok("{\"id\":"+result+"}")
     else
       Ok("New company was not created.")
   }
 }
}

The Action is a composition of an Action that just checks for a valid session and the DBAction, which requires the request body to have a valid JSON object.

Test code:

"should create a Company from a Json request" in new InMemoryDB {

  val newCompany = Company(name = "New Company1")

  val fr = FakeRequest(POST, "/company")
    .withSession(("email", "bob@villa.com"))
    .withHeaders(CONTENT_TYPE -> "application/json")
    .withJsonBody(Json.toJson(newCompany))

  val action = controllers.CompanyController.addCompany

  val result = action(fr).run

  status(result) should be_==(OK)
  (contentAsJson(result) \ "id").as[Long] should be_>(1L)

}

The InMemoryDB class is just a FakeApplication with a pre-populated in memory database.

The issue that I am having is that when the test runs the result is always a 400 with body content containing a message saying [Invalid Json]. When I call the service using curl with the same JSON body content, it works and the id is returned.
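
One thing worth checking (a guess, not a confirmed diagnosis): invoking the action as action(fr).run discards the FakeRequest body, which would explain the [Invalid Json]. play.api.test.Helpers.call feeds the body through:

val result = call(controllers.CompanyController.addCompany, fr)

status(result) should be_==(OK)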

by eeb at December 17, 2014 01:17 PM

Cannot compile openjdk7 source code on CentOS6.5

I was trying to compile openjdk source code on CentOS6.5, and I got the following error message while running make. if anybody can help? thanks in advance.

software version: JDK: openjdk-7u40-fcs-src-b43-26_aug_2013 OS: Linux 2.6.32-431.el6.x86_64

make[6]: Leaving directory `/usr/local/openjdk/build/linux-amd64-debug/hotspot/outputdir/linux_amd64_compiler2/jvmg'
cd linux_amd64_compiler2/jvmg && ./test_gamma
Using java runtime at: /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre
./gamma: relocation error: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.33.x86_64/jre/lib/amd64/libjava.so: symbol JVM_FindClassFromCaller, version SUNWprivate_1.1 not defined in file libjvm.so with link time reference
make[5]: *** [jvmg] Error 127
make[5]: Leaving directory `/usr/local/openjdk/build/linux-amd64-debug/hotspot/outputdir'
make[4]: *** [generic_build2] Error 2
make[4]: Leaving directory `/usr/local/openjdk/hotspot/make'
make[3]: *** [jvmg] Error 2
make[3]: Leaving directory `/usr/local/openjdk/hotspot/make'
make[2]: *** [hotspot-build] Error 2
make[2]: Leaving directory `/usr/local/openjdk'
make[1]: *** [generic_debug_build] Error 2
make[1]: Leaving directory `/usr/local/openjdk'

by Bob.Z at December 17, 2014 01:10 PM

QuantOverflow

Risk Budgets with Target Portfolio Volatility

I'm working through the implementation of a risk budgeting approach as described in the recent Roncalli paper. The idea is that the portfolio manager sets a contribution of total portfolio volatility to each asset in the portfolio (the budget, $b_i$ where $\sum_{i=1}^n b_i = 1$) and solves an optimization problem to find the weights ($x_i$ where $\sum_{i=1}^n x_i = 1$) of those assets that allow the assets' volatility contribution to match the set budget. A somewhat similar approach was discussed on this site here.

More formally (eq. 8):

$$x^*=\underset{x}{\arg \min} \sum_{i=1}^n \left(\frac{x_i(\Sigma x)_i}{\sum_{j=1}^n x_j(\Sigma x)_j} - b_i\right)^2$$ $$\text{u.c. } 1^Tx=1;\quad 0\le x \le 1$$

where

  • $x$ is the weight of asset $i$
  • $n$ is the number of assets
  • $(\Sigma x)_i$ is the covariance of asset $i$ with respect to the portfolio (I think this is the interpretation; perhaps someone can confirm)
  • $b_i$ is the set risk budget for asset $i$

What I am trying to do is add to this approach the ability for the manager to set an overall target portfolio volatility in addition to the budget of each asset.

According to the paper, we know:

$$\sum_{i=1}^n RC_i(x_i,...,x_n)=\sum_{i=1}^n x_i \frac{(\Sigma x)_i}{\sqrt{x^T\Sigma x}}=\sigma(x)$$

where

  • $RC_i$ is the risk contribution ($b$) of asset $i$

Because of these relationships, I've drawn the conclusion that $\sqrt{x^T\Sigma x} = \sum_{j=1}^n x_j(\Sigma x)_j$ (portfolio volatility is the sum of each asset's volatility contribution), so I thought I might insert my target portfolio volatility as the denominator in the minimization problem above. I got reasonable results in my tests, but the actual portfolio volatility computed from the optimization's results never matched the target.

After I thought about this for a while, I realized this approach is probably naive and likely wrong: I'm effectively using two different covariance matrices to represent the same volatility value, the computed covariance matrix in the numerator and the covariance matrix implied by the target volatility estimate.

My question is twofold:

  1. Are there papers/references available that describe the mechanics of setting asset level risk budgets as well as a portfolio level target volatility?
  2. Does anyone have an idea independent of any papers or resources how I might go about setting asset level risk budgets as well as a portfolio level target volatility?

by strimp099 at December 17, 2014 12:47 PM

How much less likely is a stop loss to be touched/hit after increasing expected return?

Firstly, let's say we have a stock ABC currently trading at \$100.00 that has:
(A) an expected return of 0% per year and (B) a standard deviation of 20% per year.

Given these stats, the stock has a 50% chance of at least touching \$90.00 at some point within one year ("the lower bound"), and a 50% chance of at least touching \$110.00 at some point within one year ("the upper bound"). Altogether, there is a 100% chance of the stock touching either \$110 or \$90 at some point within one year.

Hence, a stop loss placed at \$90.00 would have a 50% chance of being hit at some point within one year, and vice versa for the \$110.00 take-profit target.

Now, suppose our stock ABC has an average expected return of +10% per year but the standard deviation remains the same at 20% per year.

Given the upward bias in the stock, am I correct to say that the chance of the stock touching \$90.00 is now less than 50%, and the chance of it touching \$110.00 is now more than 50%?

Also, given this new average expected return, what should be the new prices for the upper and lower bound such that the new lower bound has a 50% probability of being hit any time within one year and likewise 50% for the upper bound (for a total of 100% probability of either being hit on the upper or lower bound any time within one year)? And what is the formula used to arrive at this answer?
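
One standard way to get at this (an assumption on my part, since the question does not fix a model: take the stock to follow geometric Brownian motion and drop the one-year cutoff): work with $X_t=\ln(S_t/S_0)$, drift $\nu=\mu-\sigma^2/2$, and log-barriers $a=\ln(90/100)<0<b=\ln(110/100)$. The probability of touching the lower bound before the upper bound is then

$$P(\text{lower first})=\frac{1-e^{-2\nu b/\sigma^2}}{e^{-2\nu a/\sigma^2}-e^{-2\nu b/\sigma^2}}$$

which reduces to $b/(b-a)$ when $\nu=0$. With $\mu=10\%$ and $\sigma=20\%$ we get $\nu=0.08>0$, so the chance of touching \$90.00 first indeed falls below the driftless value; and for $\nu \neq 0$ the bounds have equal touch probabilities exactly when $e^{-2\nu a/\sigma^2}+e^{-2\nu b/\sigma^2}=2$, which can be solved for the pair $(a,b)$. One caveat: over a finite one-year horizon the two events no longer exhaust all outcomes, and the exact probabilities require the series-form double-barrier first-passage distribution instead.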

by Golden Goose at December 17, 2014 12:44 PM

StackOverflow

How can I do calculations on subsets, the Pandas way, without looping

I have days like this:

eventday_idxs
2005-01-07 00:00:00
2005-01-31 00:00:00
2005-02-15 00:00:00
2005-04-18 00:00:00
2005-05-11 00:00:00
2005-08-12 00:00:00
2005-08-15 00:00:00
2005-09-06 00:00:00
2005-09-19 00:00:00
2005-10-12 00:00:00
2005-10-13 00:00:00
2005-10-20 00:00:00
2006-01-10 00:00:00
2006-01-30 00:00:00
2006-02-10 00:00:00
2006-03-29 00:00:00

I want to do calculations on FROM:TO ranges of it, like the following, on an AAPL stock dataset. As I am a beginner in Pandas, I use a loop and do it like this:

import datetime
from datetime import timedelta

import pandas as pd

# fetch ten years of AAPL quotes (pd.io.data was the data API at the time)
aapl_10_years = pd.io.data.get_data_yahoo('AAPL',
                                          start=datetime.datetime(2004, 12, 10),
                                          end=datetime.datetime(2014, 12, 10))
one_day = timedelta(days=1)
for i, ind in enumerate(eventday_idxs):
    try:
        # window from this event day up to the day before the next one
        do_calculations(aapl_10_years[ind : eventday_idxs[i + 1] - one_day]['High'])
    except IndexError:
        # last event day: window runs to the end of the data
        do_calculations(aapl_10_years[ind:]['High'])

How can I apply do_calculations without loops like this? Loops like that are discouraged in Pandas because they are slow, right?
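
For what it's worth, a loop-free sketch (assuming eventday_idxs is a sorted DatetimeIndex and that do_calculations is an aggregation over each window): label every row with the event window it falls into, then group.

import numpy as np

# label each trading day with the position of the most recent event day;
# days before the first event day get label -1 and are filtered out
labels = np.searchsorted(eventday_idxs.values, aapl_10_years.index.values, side='right') - 1
highs = aapl_10_years['High']
mask = labels >= 0
results = highs[mask].groupby(labels[mask]).agg(do_calculations)  # one call per event window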

by V3ss0n at December 17, 2014 12:30 PM

Family Polymorphism in Scala: how does it work?

Hi, I am getting the following error in the code below. Can you please explain the reason for this error?

Error: type mismatch; found: UpperClassFamily.Mother; required: StandardFamily.M (which expands to StandardFamily.Mother)

Code:

object testworksheet {

  trait Family {
    type M <: Mother
    type F <: Father
    type C <: Child

    class Father(val name: String) {
      def kiss(m: M) = println("Showing signs of affection towards " + m.name)
    }
    class Mother(val name: String)
    class Child(val name: String) {
      def askForhelp(m: M) = println("Screeaaaaming at " + m.name)
    }
  }

  object UpperClassFamily extends Family {
    type F = Father; type M = Mother; type C = PoliteChild

    class Mother(name: String, val lastName: String) extends super.Mother(name)
    class PoliteChild(name: String) extends Child(name) {
      override def askForhelp(m: M) = println("Asking " + m.name + m.lastName + " for help")
    }
  }

  object StandardFamily extends Family {
    type F = Father; type M = Mother; type C = Child
  }

  def assignFamily(f: Family) = ()

  val father = new StandardFamily.Father("John")
  val upperClassMother = new UpperClassFamily.Mother("Dorthea III", "test")
  father.kiss(upperClassMother) // Error location
}
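
A minimal sketch of what the compiler sees: father has type StandardFamily.Father, whose kiss expects StandardFamily.M, and StandardFamily fixes M to its own Mother, so a mother created in the same family type-checks (the name below is a placeholder):

val standardMother = new StandardFamily.Mother("Jane")
father.kiss(standardMother) // compiles: the argument now has type StandardFamily.M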

by Anand at December 17, 2014 12:19 PM

Fred Wilson

What’s Next

I am always thinking about what is next and I feel like I’m spending even more time this year thinking about this. All of us at USV seem to be pondering this question a lot right now.

I came across this nice post by Ben Thompson in which he ponders the question out loud, which is my favorite way to ponder.

Here is the money quote:

While the introduction of the iPhone seems like it was just yesterday (at least it does to me!), we are quickly approaching seven years – about the midway point of this epoch, if the PC and Internet are any indication. I sense, though, that we may be moving a bit more quickly: the work/productivity and communications applications have really come into focus this year, and while the battle to see what companies ride those applications to dominance will be interesting, it’s highly likely that the foundation is being laid for the core technology of the next epoch:

Ben’s framework is roughly similar to ours but his conclusions are a bit different as follows:

1) I would substitute personal mesh for wearables

2) I would substitute the blockchain stack for bitcoin

3) I would bet on messenger as the next mobile OS over anything else. We have already seen that happen in China.

But in any case, posts like Ben’s and what comes of them (this) are super helpful. Thanks Ben.

by Fred Wilson at December 17, 2014 11:58 AM

/r/compsci

simulate a reality by exploiting geometric solutions rather than analytical solutions

It will essentially be The Matrix, except everyone who plays will be a god, and they get to build the world up from atomic structure. The main mechanism of action is that people who find stable formations can publish them.

You get to name an object and it becomes available as a clipboard item to everyone who is logged in.

The essential game dynamics are:

  • Spawn object
  • Outline area
  • Compress/expand
  • Speed up/slow down

It's essentially just using the same code you'd use to make a model squishy: fluid dynamics, but one-dimensional.

You set a boundary that is the object. It becomes a point particle, if need be, processing-wise. You don't have to include its history once you get far enough away from it. As a god, once you know something it's completely reliable, unlike in our world where things change so damn much!

The basic goal is to consider only van der Waals forces at first, because atomics do a great deal of what we witness, and we are nowhere near mastering the classical mechanical environment.

The idea is to get people to understand particle physics at as early an age as possible, and to let people who know the theory of their speciality simulate in order to reduce costs. We'll never have to experiment again! We can be sure of our alterations of the physical.

submitted by lostminty
[link] [2 comments]

December 17, 2014 11:54 AM

StackOverflow

Implicit resolution in descendants of associated types to avoid import tax

class JavaRxObservable
class MyObservable extends JavaRxObservable

object MyObservable {
  implicit class ObservablePimp(o: JavaRxObservable) {
    def flatMapPimp: JavaRxObservable = ... 
  }
}

def api: JavaRxObservable = new MyObservable

api.flatMapPimp // is this possible without import statement ?

Note that it is not possible to create a companion object for the third-party type JavaRxObservable. And since my api method must return the JavaRxObservable type because of its combinatory nature (MyObservable#map would return JavaRxObservable), this is a major drawback in API design: you are forcing people to read the documentation to learn that they have to import something in order to use your API. Not even a declaration in a package object helps if it is meant to be an API used by third parties.
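
One partial workaround I can sketch (hedged, and it conflicts with the constraint above that api must return JavaRxObservable): if the declared return type is the subtype, the companion object of the receiver's static type is part of the implicit scope, so the extension resolves without an import:

class JavaRxObservable
class MyObservable extends JavaRxObservable

object MyObservable {
  implicit class ObservablePimp(o: JavaRxObservable) {
    def flatMapPimp: JavaRxObservable = o // placeholder body
  }
}

def api2: MyObservable = new MyObservable // note the narrower return type
api2.flatMapPimp // compiles: ObservablePimp is found via MyObservable's companion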

by lisak at December 17, 2014 11:50 AM

Mapping A => Option[B] functions to immutable collections of As

Is there any equivalent of the following function in Scala's standard library?

def traverse[A, B](collection: List[A])(f: A => Option[B]): Option[List[B]]

traverse applies a function that can fail to an immutable list. It returns None at the first failure. It returns Some(list) if everything went fine.

Here I'm using lists, but it could be immutable hash maps for example.
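
Not in the standard library as far as I know (as of Scala 2.11; Scalaz provides it as traverse), but a minimal sketch is short:

def traverse[A, B](collection: List[A])(f: A => Option[B]): Option[List[B]] =
  collection.foldRight(Option(List.empty[B])) { (a, acc) =>
    for (b <- f(a); rest <- acc) yield b :: rest
  }

// usage: the result is None as soon as any element fails
traverse(List("1", "2", "x"))(s => scala.util.Try(s.toInt).toOption) // None
traverse(List("1", "2", "3"))(s => scala.util.Try(s.toInt).toOption) // Some(List(1, 2, 3))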

by Antoine at December 17, 2014 11:36 AM

Haskell, Don't know why this has a *parse error on input ‘if’*

This is supposed to take a number, get its factorial and double it. However, because of the base case, if you input 0 it gives 2 as the answer, so in order to bypass that I used an if statement, but I get the error parse error on input ‘if’. I'd really appreciate it if you guys could help :)

fact :: Int -> Int
fact 0 = 1
fact n = n * fact(n-1)

doub :: Int -> Int
doub r = 2 * r

factorialDouble :: IO()
factorialDouble = do 
                    putStr "Enter a Value: "
                    x <- getLine
                    let num = (read x) :: Int
                        if (num == 0) then error "factorial of zero is 0"
                            else let y = doub (fact num) 
                                    putStrLn ("the double of factorial of " ++ x ++ " is " ++ (show y))

by Ashish Sondhi at December 17, 2014 11:31 AM

Lift beginner: How to use Box with Java null values

I am a Lift beginner and often code things like this: I use a Java method that returns an object, or null if a value was not found. So I need to check for null values:

val value = javaobject.findThing(xyz)
if(value != null) {
    value.doAnotherThing()
} else {
    warn("Value not found.")
}

Can I write this code more concisely with the Box concept? I have read the Lift wiki documentation about the Box concept, but I don't understand how to use it with Java null values.
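
A sketch of how I understand the Box approach (Box !! wraps a possibly-null Java value as Full or Empty; javaobject, xyz and warn are from the snippet above):

import net.liftweb.common.{Box, Full}

val valueBox = Box !! javaobject.findThing(xyz) // Full(thing), or Empty if the Java call returned null
valueBox match {
  case Full(value) => value.doAnotherThing()
  case _           => warn("Value not found.")
}

// or, when only the success path matters:
(Box !! javaobject.findThing(xyz)).foreach(_.doAnotherThing())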

by Sonson at December 17, 2014 11:07 AM

Modify fluentd json output

How can we easily transform, with fluentd (and plugins), something like this

{
    "remote": "87.85.14.126",
    "city": "saint-hubert"
}

To this:

{
   "geoip": {
       "remote": "87.85.14.126",
       "city": "saint-hubert"
   }
}

Thank you
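
A sketch of one possibility, untested (this assumes the record_transformer filter plugin with Ruby evaluation enabled; whether a Ruby hash survives as a nested field can depend on the fluentd version, so treat it as a starting point; your.tag is a placeholder):

<filter your.tag>
  @type record_transformer
  enable_ruby
  renew_record true
  <record>
    geoip ${ {"remote" => record["remote"], "city" => record["city"]} }
  </record>
</filter>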

by Florent Valdelievre at December 17, 2014 10:55 AM

CompsciOverflow

The number of maximal independent sets

An independent set is a set of vertices in a graph, no two of which are adjacent. A maximal independent set is an independent set to which no vertex can be added. I want to know whether the number of maximal independent sets of a graph can be exponential.
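
One worked data point on the direction of the question: a graph that is a disjoint union of $n/3$ triangles has $3^{n/3}$ maximal independent sets, since a maximal independent set picks exactly one vertex from each triangle; by the Moon-Moser theorem, $3^{n/3}$ is also the maximum possible, so the count can be exponential but never worse than that.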

by tounsy at December 17, 2014 10:40 AM

StackOverflow

How to turn off auto-formatting in IntelliJ 14 for play routing files?

Every time I press return to enter a new route into the file, it auto-reformats the entire file, which I do not want. I cannot find the setting to turn off the auto-formatting; is there one?

by toths at December 17, 2014 10:19 AM